00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2444 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3705 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.100 Fetching changes from the remote Git repository 00:00:00.103 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.187 > git --version # 'git version 2.39.2' 00:00:00.187 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.036 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.046 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.057 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.057 > git config core.sparsecheckout # timeout=10 00:00:06.068 > git read-tree -mu HEAD # timeout=10 00:00:06.084 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.105 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.105 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.189 [Pipeline] Start of Pipeline 00:00:06.203 [Pipeline] library 00:00:06.204 Loading library shm_lib@master 00:00:06.205 Library shm_lib@master is cached. Copying from home. 00:00:06.218 [Pipeline] node 00:00:06.232 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.234 [Pipeline] { 00:00:06.243 [Pipeline] catchError 00:00:06.244 [Pipeline] { 00:00:06.257 [Pipeline] wrap 00:00:06.265 [Pipeline] { 00:00:06.274 [Pipeline] stage 00:00:06.276 [Pipeline] { (Prologue) 00:00:06.294 [Pipeline] echo 00:00:06.296 Node: VM-host-SM9 00:00:06.303 [Pipeline] cleanWs 00:00:06.312 [WS-CLEANUP] Deleting project workspace... 00:00:06.312 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.318 [WS-CLEANUP] done 00:00:06.513 [Pipeline] setCustomBuildProperty 00:00:06.575 [Pipeline] httpRequest 00:00:07.171 [Pipeline] echo 00:00:07.172 Sorcerer 10.211.164.20 is alive 00:00:07.180 [Pipeline] retry 00:00:07.182 [Pipeline] { 00:00:07.201 [Pipeline] httpRequest 00:00:07.209 HttpMethod: GET 00:00:07.210 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.213 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.214 Response Code: HTTP/1.1 200 OK 00:00:07.215 Success: Status code 200 is in the accepted range: 200,404 00:00:07.216 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.469 [Pipeline] } 00:00:08.485 [Pipeline] // retry 00:00:08.492 [Pipeline] sh 00:00:08.773 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.786 [Pipeline] httpRequest 00:00:09.891 [Pipeline] echo 00:00:09.893 Sorcerer 10.211.164.20 is alive 00:00:09.903 [Pipeline] retry 00:00:09.906 [Pipeline] { 00:00:09.920 [Pipeline] httpRequest 00:00:09.924 HttpMethod: GET 00:00:09.925 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.926 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.945 Response Code: HTTP/1.1 200 OK 00:00:09.946 Success: Status code 200 is in the accepted range: 200,404 00:00:09.946 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:16.478 [Pipeline] } 00:01:16.496 [Pipeline] // retry 00:01:16.504 [Pipeline] sh 00:01:16.783 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:19.329 [Pipeline] sh 00:01:19.609 + git -C spdk log --oneline -n5 00:01:19.609 c13c99a5e test: Various fixes for Fedora40 00:01:19.609 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:19.609 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:19.609 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:19.609 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:19.629 [Pipeline] writeFile 00:01:19.645 [Pipeline] sh 00:01:19.928 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:19.939 [Pipeline] sh 00:01:20.220 + cat autorun-spdk.conf 00:01:20.220 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.220 SPDK_TEST_NVMF=1 00:01:20.220 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.220 SPDK_TEST_URING=1 00:01:20.220 SPDK_TEST_VFIOUSER=1 00:01:20.221 SPDK_TEST_USDT=1 00:01:20.221 SPDK_RUN_UBSAN=1 00:01:20.221 NET_TYPE=virt 00:01:20.221 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.227 RUN_NIGHTLY=1 00:01:20.229 [Pipeline] } 00:01:20.242 [Pipeline] // stage 00:01:20.257 [Pipeline] stage 00:01:20.259 [Pipeline] { (Run VM) 00:01:20.271 [Pipeline] sh 00:01:20.552 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.552 + echo 'Start stage prepare_nvme.sh' 00:01:20.552 Start stage prepare_nvme.sh 00:01:20.552 + [[ -n 1 ]] 00:01:20.552 + disk_prefix=ex1 00:01:20.552 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:20.552 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:20.552 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:20.552 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.552 ++ SPDK_TEST_NVMF=1 00:01:20.552 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.552 ++ SPDK_TEST_URING=1 00:01:20.552 ++ SPDK_TEST_VFIOUSER=1 00:01:20.552 ++ SPDK_TEST_USDT=1 00:01:20.552 ++ SPDK_RUN_UBSAN=1 00:01:20.552 ++ NET_TYPE=virt 00:01:20.552 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.552 ++ RUN_NIGHTLY=1 00:01:20.552 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.552 + nvme_files=() 00:01:20.552 + declare -A nvme_files 00:01:20.552 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.552 + nvme_files['nvme.img']=5G 00:01:20.552 + nvme_files['nvme-cmb.img']=5G 00:01:20.552 + nvme_files['nvme-multi0.img']=4G 00:01:20.552 + nvme_files['nvme-multi1.img']=4G 00:01:20.552 + nvme_files['nvme-multi2.img']=4G 00:01:20.552 + nvme_files['nvme-openstack.img']=8G 00:01:20.552 + nvme_files['nvme-zns.img']=5G 00:01:20.552 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.552 + (( SPDK_TEST_FTL == 1 )) 00:01:20.552 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.552 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:20.552 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:20.552 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:20.552 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:20.552 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:20.552 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:20.552 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.552 + for nvme in "${!nvme_files[@]}" 00:01:20.552 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:20.811 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.811 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:20.811 + echo 'End stage prepare_nvme.sh' 00:01:20.811 End stage prepare_nvme.sh 00:01:20.822 [Pipeline] sh 00:01:21.101 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.102 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img 
-b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:21.102 00:01:21.102 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:21.102 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:21.102 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:21.102 HELP=0 00:01:21.102 DRY_RUN=0 00:01:21.102 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:21.102 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.102 NVME_AUTO_CREATE=0 00:01:21.102 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:21.102 NVME_CMB=,, 00:01:21.102 NVME_PMR=,, 00:01:21.102 NVME_ZNS=,, 00:01:21.102 NVME_MS=,, 00:01:21.102 NVME_FDP=,, 00:01:21.102 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.102 SPDK_VAGRANT_VMCPU=10 00:01:21.102 SPDK_VAGRANT_VMRAM=12288 00:01:21.102 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.102 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.102 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.102 SPDK_OPENSTACK_NETWORK=0 00:01:21.102 VAGRANT_PACKAGE_BOX=0 00:01:21.102 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.102 FORCE_DISTRO=true 00:01:21.102 VAGRANT_BOX_VERSION= 00:01:21.102 EXTRA_VAGRANTFILES= 00:01:21.102 NIC_MODEL=e1000 00:01:21.102 00:01:21.102 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:21.102 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:23.637 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.573 ==> default: Creating image (snapshot of base box volume). 00:01:24.573 ==> default: Creating domain with the following settings... 
00:01:24.573 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733544987_dab1149c2b1f30b63e12 00:01:24.573 ==> default: -- Domain type: kvm 00:01:24.573 ==> default: -- Cpus: 10 00:01:24.573 ==> default: -- Feature: acpi 00:01:24.573 ==> default: -- Feature: apic 00:01:24.573 ==> default: -- Feature: pae 00:01:24.573 ==> default: -- Memory: 12288M 00:01:24.573 ==> default: -- Memory Backing: hugepages: 00:01:24.573 ==> default: -- Management MAC: 00:01:24.573 ==> default: -- Loader: 00:01:24.573 ==> default: -- Nvram: 00:01:24.573 ==> default: -- Base box: spdk/fedora39 00:01:24.573 ==> default: -- Storage pool: default 00:01:24.573 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733544987_dab1149c2b1f30b63e12.img (20G) 00:01:24.573 ==> default: -- Volume Cache: default 00:01:24.573 ==> default: -- Kernel: 00:01:24.573 ==> default: -- Initrd: 00:01:24.573 ==> default: -- Graphics Type: vnc 00:01:24.573 ==> default: -- Graphics Port: -1 00:01:24.573 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.573 ==> default: -- Graphics Password: Not defined 00:01:24.573 ==> default: -- Video Type: cirrus 00:01:24.573 ==> default: -- Video VRAM: 9216 00:01:24.573 ==> default: -- Sound Type: 00:01:24.573 ==> default: -- Keymap: en-us 00:01:24.573 ==> default: -- TPM Path: 00:01:24.573 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.573 ==> default: -- Command line args: 00:01:24.573 ==> default: -> value=-device, 00:01:24.573 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:24.573 ==> default: -> value=-drive, 00:01:24.573 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:24.573 ==> default: -> value=-device, 00:01:24.573 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.573 ==> default: -> value=-device, 00:01:24.574 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:24.574 ==> default: -> value=-drive, 00:01:24.574 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:24.574 ==> default: -> value=-device, 00:01:24.574 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.574 ==> default: -> value=-drive, 00:01:24.574 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:24.574 ==> default: -> value=-device, 00:01:24.574 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.574 ==> default: -> value=-drive, 00:01:24.574 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:24.574 ==> default: -> value=-device, 00:01:24.574 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.574 ==> default: Creating shared folders metadata... 00:01:24.574 ==> default: Starting domain. 00:01:25.953 ==> default: Waiting for domain to get an IP address... 00:01:44.040 ==> default: Waiting for SSH to become available... 00:01:44.040 ==> default: Configuring and enabling network interfaces... 
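
The "Command line args" block above is easier to follow when the -device/-drive pairs are regrouped into a single invocation. The sketch below only rearranges arguments already printed in the log (the emulator path is the SPDK_QEMU_EMULATOR value from this run); the machine type, memory, boot disk and network options that libvirt adds to the real domain are omitted, so this illustrates the NVMe topology rather than a bootable command:

    # Controller serial 12340: one namespace backed by ex1-nvme.img.
    # Controller serial 12341: three namespaces backed by ex1-nvme-multi{0,1,2}.img.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096
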
00:01:46.589 default: SSH address: 192.168.121.58:22 00:01:46.589 default: SSH username: vagrant 00:01:46.589 default: SSH auth method: private key 00:01:48.503 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:56.609 ==> default: Mounting SSHFS shared folder... 00:01:57.541 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.541 ==> default: Checking Mount.. 00:01:58.916 ==> default: Folder Successfully Mounted! 00:01:58.916 ==> default: Running provisioner: file... 00:01:59.513 default: ~/.gitconfig => .gitconfig 00:02:00.100 00:02:00.100 SUCCESS! 00:02:00.100 00:02:00.100 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.100 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.100 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.100 00:02:00.109 [Pipeline] } 00:02:00.124 [Pipeline] // stage 00:02:00.134 [Pipeline] dir 00:02:00.135 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:00.137 [Pipeline] { 00:02:00.152 [Pipeline] catchError 00:02:00.154 [Pipeline] { 00:02:00.168 [Pipeline] sh 00:02:00.447 + vagrant ssh-config --host vagrant 00:02:00.447 + sed -ne /^Host/,$p 00:02:00.447 + tee ssh_conf 00:02:04.636 Host vagrant 00:02:04.636 HostName 192.168.121.58 00:02:04.636 User vagrant 00:02:04.636 Port 22 00:02:04.636 UserKnownHostsFile /dev/null 00:02:04.636 StrictHostKeyChecking no 00:02:04.636 PasswordAuthentication no 00:02:04.636 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:04.636 IdentitiesOnly yes 00:02:04.636 LogLevel FATAL 00:02:04.636 ForwardAgent yes 00:02:04.636 ForwardX11 yes 00:02:04.636 00:02:04.651 [Pipeline] withEnv 00:02:04.653 [Pipeline] { 00:02:04.667 [Pipeline] sh 00:02:04.948 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:04.948 source /etc/os-release 00:02:04.948 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.948 # Minimal, systemd-like check. 00:02:04.948 if [[ -e /.dockerenv ]]; then 00:02:04.948 # Clear garbage from the node's name: 00:02:04.948 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.948 # $HOSTNAME is the actual container id 00:02:04.948 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.948 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.948 # We can assume this is a mount from a host where container is running, 00:02:04.948 # so fetch its hostname to easily identify the target swarm worker. 
00:02:04.948 container="$(< /etc/hostname) ($agent)" 00:02:04.948 else 00:02:04.948 # Fallback 00:02:04.948 container=$agent 00:02:04.948 fi 00:02:04.948 fi 00:02:04.948 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.948 00:02:04.959 [Pipeline] } 00:02:04.977 [Pipeline] // withEnv 00:02:04.986 [Pipeline] setCustomBuildProperty 00:02:05.002 [Pipeline] stage 00:02:05.004 [Pipeline] { (Tests) 00:02:05.022 [Pipeline] sh 00:02:05.304 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.318 [Pipeline] sh 00:02:05.600 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:05.874 [Pipeline] timeout 00:02:05.875 Timeout set to expire in 1 hr 0 min 00:02:05.877 [Pipeline] { 00:02:05.891 [Pipeline] sh 00:02:06.171 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.739 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:06.752 [Pipeline] sh 00:02:07.033 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:07.307 [Pipeline] sh 00:02:07.589 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:07.866 [Pipeline] sh 00:02:08.146 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:08.405 ++ readlink -f spdk_repo 00:02:08.405 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.405 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.405 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.405 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.405 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.405 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.405 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.405 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:08.405 + cd /home/vagrant/spdk_repo 00:02:08.405 + source /etc/os-release 00:02:08.405 ++ NAME='Fedora Linux' 00:02:08.405 ++ VERSION='39 (Cloud Edition)' 00:02:08.405 ++ ID=fedora 00:02:08.405 ++ VERSION_ID=39 00:02:08.405 ++ VERSION_CODENAME= 00:02:08.405 ++ PLATFORM_ID=platform:f39 00:02:08.405 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.405 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.405 ++ LOGO=fedora-logo-icon 00:02:08.405 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.405 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.405 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.405 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.405 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.405 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.405 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.405 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.405 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.405 ++ SUPPORT_END=2024-11-12 00:02:08.405 ++ VARIANT='Cloud Edition' 00:02:08.405 ++ VARIANT_ID=cloud 00:02:08.405 + uname -a 00:02:08.405 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.405 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.405 Hugepages 00:02:08.405 node hugesize free / total 00:02:08.405 node0 1048576kB 0 / 0 00:02:08.405 node0 2048kB 0 / 0 00:02:08.405 00:02:08.405 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:08.405 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:08.405 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:08.405 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:08.405 + rm -f /tmp/spdk-ld-path 00:02:08.405 + source autorun-spdk.conf 00:02:08.405 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.405 ++ SPDK_TEST_NVMF=1 00:02:08.405 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.405 ++ SPDK_TEST_URING=1 00:02:08.405 ++ SPDK_TEST_VFIOUSER=1 00:02:08.405 ++ SPDK_TEST_USDT=1 00:02:08.405 ++ SPDK_RUN_UBSAN=1 00:02:08.405 ++ NET_TYPE=virt 00:02:08.405 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.405 ++ RUN_NIGHTLY=1 00:02:08.405 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:08.405 + [[ -n '' ]] 00:02:08.405 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.405 + for M in /var/spdk/build-*-manifest.txt 00:02:08.405 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:08.405 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.405 + for M in /var/spdk/build-*-manifest.txt 00:02:08.405 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.405 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.664 + for M in /var/spdk/build-*-manifest.txt 00:02:08.664 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.664 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.664 ++ uname 00:02:08.664 + [[ Linux == \L\i\n\u\x ]] 00:02:08.664 + sudo dmesg -T 00:02:08.664 + sudo dmesg --clear 00:02:08.664 + dmesg_pid=5231 00:02:08.664 + [[ Fedora Linux == FreeBSD ]] 00:02:08.664 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.664 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.664 + sudo dmesg -Tw 00:02:08.664 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
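
The xtrace lines above show the pattern the run relies on: autorun-spdk.conf is sourced into the shell and later steps branch on the resulting variables (for example the FIO_BIN and QEMU_BIN exports that follow). A minimal, self-contained sketch of that pattern, under the assumption that run_step is a hypothetical stand-in for the real test entry points and using the conf path from this job:

    #!/usr/bin/env bash
    # Illustrative sketch only: source the job's autorun-spdk.conf and gate
    # work on its flags, mirroring the checks visible in the log above.
    conf=${1:-/home/vagrant/spdk_repo/autorun-spdk.conf}
    source "$conf"

    # Optional tooling is exported only if it exists on the test VM.
    [[ -x /usr/src/fio-static/fio ]] && export FIO_BIN=/usr/src/fio-static/fio

    run_step() { echo ">>> would run: $*"; }   # hypothetical helper

    if (( SPDK_TEST_NVMF == 1 )); then
        run_step "NVMe-oF functional tests over transport '${SPDK_TEST_NVMF_TRANSPORT}'"
    fi
    if (( SPDK_TEST_URING == 1 )); then
        run_step "socket tests against the io_uring implementation"
    fi
    if (( SPDK_RUN_UBSAN == 1 )); then
        run_step "build and test with UBSan enabled (--enable-ubsan)"
    fi
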
00:02:08.664 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.664 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.664 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.664 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.664 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.664 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.664 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.664 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.664 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.664 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.664 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.664 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.664 Test configuration: 00:02:08.664 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.664 SPDK_TEST_NVMF=1 00:02:08.664 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.664 SPDK_TEST_URING=1 00:02:08.664 SPDK_TEST_VFIOUSER=1 00:02:08.664 SPDK_TEST_USDT=1 00:02:08.664 SPDK_RUN_UBSAN=1 00:02:08.664 NET_TYPE=virt 00:02:08.664 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.664 RUN_NIGHTLY=1 04:17:11 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:08.664 04:17:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.664 04:17:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.664 04:17:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.664 04:17:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.664 04:17:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.664 04:17:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.664 04:17:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.664 04:17:11 -- paths/export.sh@5 -- $ export PATH 00:02:08.664 04:17:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.664 04:17:11 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.664 04:17:11 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:08.664 04:17:11 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733545031.XXXXXX 00:02:08.664 04:17:11 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733545031.pwMIFQ 00:02:08.664 04:17:11 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:08.664 04:17:11 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:08.664 04:17:11 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.664 04:17:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.664 04:17:11 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.664 04:17:11 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:08.664 04:17:11 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:08.664 04:17:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.664 04:17:11 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:08.664 04:17:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.664 04:17:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.664 04:17:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.664 04:17:11 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.664 Sat Dec 7 04:17:11 AM UTC 2024 00:02:08.664 04:17:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.664 LTS-67-gc13c99a5e 00:02:08.664 04:17:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.664 04:17:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.664 04:17:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.664 04:17:11 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:08.664 04:17:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:08.664 04:17:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.664 ************************************ 00:02:08.664 START TEST ubsan 00:02:08.664 ************************************ 00:02:08.664 using ubsan 00:02:08.664 04:17:11 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:08.664 00:02:08.664 real 0m0.000s 00:02:08.664 user 0m0.000s 00:02:08.664 sys 0m0.000s 00:02:08.664 04:17:11 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:08.664 ************************************ 00:02:08.664 END TEST ubsan 00:02:08.664 ************************************ 00:02:08.664 04:17:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.664 04:17:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.664 04:17:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.664 04:17:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.664 04:17:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.664 04:17:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.664 04:17:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.665 04:17:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.665 04:17:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.665 04:17:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:08.923 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.923 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.498 Using 'verbs' RDMA provider 00:02:24.952 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:34.926 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:35.186 Creating mk/config.mk...done. 00:02:35.186 Creating mk/cc.flags.mk...done. 00:02:35.186 Type 'make' to build. 00:02:35.186 04:17:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:35.186 04:17:38 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:35.186 04:17:38 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:35.186 04:17:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.186 ************************************ 00:02:35.186 START TEST make 00:02:35.186 ************************************ 00:02:35.186 04:17:38 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:35.445 make[1]: Nothing to be done for 'all'. 00:02:36.822 The Meson build system 00:02:36.822 Version: 1.5.0 00:02:36.822 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:36.822 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:36.822 Build type: native build 00:02:36.822 Project name: libvfio-user 00:02:36.822 Project version: 0.0.1 00:02:36.822 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.822 C linker for the host machine: cc ld.bfd 2.40-14 00:02:36.822 Host machine cpu family: x86_64 00:02:36.822 Host machine cpu: x86_64 00:02:36.822 Run-time dependency threads found: YES 00:02:36.822 Library dl found: YES 00:02:36.822 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.822 Run-time dependency json-c found: YES 0.17 00:02:36.822 Run-time dependency cmocka found: YES 1.1.7 00:02:36.822 Program pytest-3 found: NO 00:02:36.822 Program flake8 found: NO 00:02:36.822 Program misspell-fixer found: NO 00:02:36.822 Program restructuredtext-lint found: NO 00:02:36.822 Program valgrind found: YES (/usr/bin/valgrind) 00:02:36.822 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.822 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.822 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.822 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:36.822 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:36.822 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:36.822 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:36.822 Build targets in project: 8 00:02:36.822 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:36.822 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:36.822 00:02:36.822 libvfio-user 0.0.1 00:02:36.822 00:02:36.822 User defined options 00:02:36.822 buildtype : debug 00:02:36.822 default_library: shared 00:02:36.822 libdir : /usr/local/lib 00:02:36.822 00:02:36.822 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.388 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:37.388 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:37.388 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:37.388 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:37.388 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:37.388 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:37.388 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:37.388 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:37.388 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:37.388 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:37.388 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:37.388 [11/37] Compiling C object samples/null.p/null.c.o 00:02:37.388 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:37.647 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:37.647 [14/37] Compiling C object samples/server.p/server.c.o 00:02:37.647 [15/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:37.647 [16/37] Compiling C object samples/client.p/client.c.o 00:02:37.647 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:37.647 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:37.647 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:37.647 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:37.647 [21/37] Linking target samples/client 00:02:37.647 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:37.647 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:37.647 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:37.647 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:37.647 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:37.647 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:37.647 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:37.647 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:37.905 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:37.905 [31/37] Linking target test/unit_tests 00:02:37.905 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:37.905 [33/37] Linking target samples/gpio-pci-idio-16 00:02:37.905 [34/37] Linking target samples/server 00:02:37.905 [35/37] Linking target samples/lspci 00:02:37.905 [36/37] Linking target samples/null 00:02:37.905 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:37.905 INFO: autodetecting backend as ninja 00:02:37.905 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.163 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.422 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:38.422 ninja: no work to do. 00:02:48.394 The Meson build system 00:02:48.394 Version: 1.5.0 00:02:48.394 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:48.394 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:48.394 Build type: native build 00:02:48.394 Program cat found: YES (/usr/bin/cat) 00:02:48.394 Project name: DPDK 00:02:48.394 Project version: 23.11.0 00:02:48.394 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.394 C linker for the host machine: cc ld.bfd 2.40-14 00:02:48.394 Host machine cpu family: x86_64 00:02:48.394 Host machine cpu: x86_64 00:02:48.394 Message: ## Building in Developer Mode ## 00:02:48.394 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.394 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.394 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.394 Program python3 found: YES (/usr/bin/python3) 00:02:48.394 Program cat found: YES (/usr/bin/cat) 00:02:48.394 Compiler for C supports arguments -march=native: YES 00:02:48.394 Checking for size of "void *" : 8 00:02:48.394 Checking for size of "void *" : 8 (cached) 00:02:48.394 Library m found: YES 00:02:48.394 Library numa found: YES 00:02:48.394 Has header "numaif.h" : YES 00:02:48.394 Library fdt found: NO 00:02:48.394 Library execinfo found: NO 00:02:48.394 Has header "execinfo.h" : YES 00:02:48.394 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.394 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.394 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.394 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.394 Run-time dependency openssl found: YES 3.1.1 00:02:48.394 Run-time dependency libpcap found: YES 1.10.4 00:02:48.394 Has header "pcap.h" with dependency libpcap: YES 00:02:48.394 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.394 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.394 Compiler for C supports arguments -Wformat: YES 00:02:48.394 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.394 Compiler for C supports arguments -Wformat-security: NO 00:02:48.394 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.394 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.394 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.394 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.394 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.394 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.394 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.394 Compiler for C supports arguments -Wundef: YES 00:02:48.394 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.394 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.394 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.394 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.394 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.394 Program objdump found: YES (/usr/bin/objdump) 00:02:48.394 
Compiler for C supports arguments -mavx512f: YES 00:02:48.394 Checking if "AVX512 checking" compiles: YES 00:02:48.394 Fetching value of define "__SSE4_2__" : 1 00:02:48.394 Fetching value of define "__AES__" : 1 00:02:48.394 Fetching value of define "__AVX__" : 1 00:02:48.394 Fetching value of define "__AVX2__" : 1 00:02:48.394 Fetching value of define "__AVX512BW__" : (undefined) 00:02:48.394 Fetching value of define "__AVX512CD__" : (undefined) 00:02:48.394 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:48.394 Fetching value of define "__AVX512F__" : (undefined) 00:02:48.394 Fetching value of define "__AVX512VL__" : (undefined) 00:02:48.394 Fetching value of define "__PCLMUL__" : 1 00:02:48.394 Fetching value of define "__RDRND__" : 1 00:02:48.394 Fetching value of define "__RDSEED__" : 1 00:02:48.394 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:48.394 Fetching value of define "__znver1__" : (undefined) 00:02:48.394 Fetching value of define "__znver2__" : (undefined) 00:02:48.394 Fetching value of define "__znver3__" : (undefined) 00:02:48.394 Fetching value of define "__znver4__" : (undefined) 00:02:48.394 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.394 Message: lib/log: Defining dependency "log" 00:02:48.394 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.394 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.394 Checking for function "getentropy" : NO 00:02:48.394 Message: lib/eal: Defining dependency "eal" 00:02:48.394 Message: lib/ring: Defining dependency "ring" 00:02:48.394 Message: lib/rcu: Defining dependency "rcu" 00:02:48.394 Message: lib/mempool: Defining dependency "mempool" 00:02:48.394 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.394 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.394 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:48.394 Compiler for C supports arguments -mpclmul: YES 00:02:48.394 Compiler for C supports arguments -maes: YES 00:02:48.394 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.394 Compiler for C supports arguments -mavx512bw: YES 00:02:48.394 Compiler for C supports arguments -mavx512dq: YES 00:02:48.394 Compiler for C supports arguments -mavx512vl: YES 00:02:48.394 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.394 Compiler for C supports arguments -mavx2: YES 00:02:48.394 Compiler for C supports arguments -mavx: YES 00:02:48.394 Message: lib/net: Defining dependency "net" 00:02:48.394 Message: lib/meter: Defining dependency "meter" 00:02:48.394 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.394 Message: lib/pci: Defining dependency "pci" 00:02:48.394 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.394 Message: lib/hash: Defining dependency "hash" 00:02:48.394 Message: lib/timer: Defining dependency "timer" 00:02:48.394 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.394 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.394 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.394 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.394 Message: lib/power: Defining dependency "power" 00:02:48.394 Message: lib/reorder: Defining dependency "reorder" 00:02:48.394 Message: lib/security: Defining dependency "security" 00:02:48.394 Has header "linux/userfaultfd.h" : YES 00:02:48.394 Has header "linux/vduse.h" : YES 00:02:48.394 Message: lib/vhost: Defining dependency "vhost" 00:02:48.394 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:48.394 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.394 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.395 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.395 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.395 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.395 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.395 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.395 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.395 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.395 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:48.395 Configuring doxy-api-html.conf using configuration 00:02:48.395 Configuring doxy-api-man.conf using configuration 00:02:48.395 Program mandb found: YES (/usr/bin/mandb) 00:02:48.395 Program sphinx-build found: NO 00:02:48.395 Configuring rte_build_config.h using configuration 00:02:48.395 Message: 00:02:48.395 ================= 00:02:48.395 Applications Enabled 00:02:48.395 ================= 00:02:48.395 00:02:48.395 apps: 00:02:48.395 00:02:48.395 00:02:48.395 Message: 00:02:48.395 ================= 00:02:48.395 Libraries Enabled 00:02:48.395 ================= 00:02:48.395 00:02:48.395 libs: 00:02:48.395 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.395 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.395 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.395 00:02:48.395 Message: 00:02:48.395 =============== 00:02:48.395 Drivers Enabled 00:02:48.395 =============== 00:02:48.395 00:02:48.395 common: 00:02:48.395 00:02:48.395 bus: 00:02:48.395 pci, vdev, 00:02:48.395 mempool: 00:02:48.395 ring, 00:02:48.395 dma: 00:02:48.395 00:02:48.395 net: 00:02:48.395 00:02:48.395 crypto: 00:02:48.395 00:02:48.395 compress: 00:02:48.395 00:02:48.395 vdpa: 00:02:48.395 00:02:48.395 00:02:48.395 Message: 00:02:48.395 ================= 00:02:48.395 Content Skipped 00:02:48.395 ================= 00:02:48.395 00:02:48.395 apps: 00:02:48.395 dumpcap: explicitly disabled via build config 00:02:48.395 graph: explicitly disabled via build config 00:02:48.395 pdump: explicitly disabled via build config 00:02:48.395 proc-info: explicitly disabled via build config 00:02:48.395 test-acl: explicitly disabled via build config 00:02:48.395 test-bbdev: explicitly disabled via build config 00:02:48.395 test-cmdline: explicitly disabled via build config 00:02:48.395 test-compress-perf: explicitly disabled via build config 00:02:48.395 test-crypto-perf: explicitly disabled via build config 00:02:48.395 test-dma-perf: explicitly disabled via build config 00:02:48.395 test-eventdev: explicitly disabled via build config 00:02:48.395 test-fib: explicitly disabled via build config 00:02:48.395 test-flow-perf: explicitly disabled via build config 00:02:48.395 test-gpudev: explicitly disabled via build config 00:02:48.395 test-mldev: explicitly disabled via build config 00:02:48.395 test-pipeline: explicitly disabled via build config 00:02:48.395 test-pmd: explicitly disabled via build config 00:02:48.395 test-regex: explicitly disabled via build config 00:02:48.395 test-sad: explicitly disabled via build config 00:02:48.395 test-security-perf: explicitly disabled via build config 00:02:48.395 00:02:48.395 libs: 00:02:48.395 metrics: explicitly 
disabled via build config 00:02:48.395 acl: explicitly disabled via build config 00:02:48.395 bbdev: explicitly disabled via build config 00:02:48.395 bitratestats: explicitly disabled via build config 00:02:48.395 bpf: explicitly disabled via build config 00:02:48.395 cfgfile: explicitly disabled via build config 00:02:48.395 distributor: explicitly disabled via build config 00:02:48.395 efd: explicitly disabled via build config 00:02:48.395 eventdev: explicitly disabled via build config 00:02:48.395 dispatcher: explicitly disabled via build config 00:02:48.395 gpudev: explicitly disabled via build config 00:02:48.395 gro: explicitly disabled via build config 00:02:48.395 gso: explicitly disabled via build config 00:02:48.395 ip_frag: explicitly disabled via build config 00:02:48.395 jobstats: explicitly disabled via build config 00:02:48.395 latencystats: explicitly disabled via build config 00:02:48.395 lpm: explicitly disabled via build config 00:02:48.395 member: explicitly disabled via build config 00:02:48.395 pcapng: explicitly disabled via build config 00:02:48.395 rawdev: explicitly disabled via build config 00:02:48.395 regexdev: explicitly disabled via build config 00:02:48.395 mldev: explicitly disabled via build config 00:02:48.395 rib: explicitly disabled via build config 00:02:48.395 sched: explicitly disabled via build config 00:02:48.395 stack: explicitly disabled via build config 00:02:48.395 ipsec: explicitly disabled via build config 00:02:48.395 pdcp: explicitly disabled via build config 00:02:48.395 fib: explicitly disabled via build config 00:02:48.395 port: explicitly disabled via build config 00:02:48.395 pdump: explicitly disabled via build config 00:02:48.395 table: explicitly disabled via build config 00:02:48.395 pipeline: explicitly disabled via build config 00:02:48.395 graph: explicitly disabled via build config 00:02:48.395 node: explicitly disabled via build config 00:02:48.395 00:02:48.395 drivers: 00:02:48.395 common/cpt: not in enabled drivers build config 00:02:48.395 common/dpaax: not in enabled drivers build config 00:02:48.395 common/iavf: not in enabled drivers build config 00:02:48.395 common/idpf: not in enabled drivers build config 00:02:48.395 common/mvep: not in enabled drivers build config 00:02:48.395 common/octeontx: not in enabled drivers build config 00:02:48.395 bus/auxiliary: not in enabled drivers build config 00:02:48.395 bus/cdx: not in enabled drivers build config 00:02:48.395 bus/dpaa: not in enabled drivers build config 00:02:48.395 bus/fslmc: not in enabled drivers build config 00:02:48.395 bus/ifpga: not in enabled drivers build config 00:02:48.395 bus/platform: not in enabled drivers build config 00:02:48.395 bus/vmbus: not in enabled drivers build config 00:02:48.395 common/cnxk: not in enabled drivers build config 00:02:48.395 common/mlx5: not in enabled drivers build config 00:02:48.395 common/nfp: not in enabled drivers build config 00:02:48.395 common/qat: not in enabled drivers build config 00:02:48.395 common/sfc_efx: not in enabled drivers build config 00:02:48.395 mempool/bucket: not in enabled drivers build config 00:02:48.395 mempool/cnxk: not in enabled drivers build config 00:02:48.395 mempool/dpaa: not in enabled drivers build config 00:02:48.395 mempool/dpaa2: not in enabled drivers build config 00:02:48.395 mempool/octeontx: not in enabled drivers build config 00:02:48.395 mempool/stack: not in enabled drivers build config 00:02:48.395 dma/cnxk: not in enabled drivers build config 00:02:48.395 dma/dpaa: not in 
enabled drivers build config 00:02:48.395 dma/dpaa2: not in enabled drivers build config 00:02:48.395 dma/hisilicon: not in enabled drivers build config 00:02:48.395 dma/idxd: not in enabled drivers build config 00:02:48.395 dma/ioat: not in enabled drivers build config 00:02:48.395 dma/skeleton: not in enabled drivers build config 00:02:48.395 net/af_packet: not in enabled drivers build config 00:02:48.395 net/af_xdp: not in enabled drivers build config 00:02:48.395 net/ark: not in enabled drivers build config 00:02:48.395 net/atlantic: not in enabled drivers build config 00:02:48.395 net/avp: not in enabled drivers build config 00:02:48.395 net/axgbe: not in enabled drivers build config 00:02:48.395 net/bnx2x: not in enabled drivers build config 00:02:48.395 net/bnxt: not in enabled drivers build config 00:02:48.395 net/bonding: not in enabled drivers build config 00:02:48.395 net/cnxk: not in enabled drivers build config 00:02:48.395 net/cpfl: not in enabled drivers build config 00:02:48.395 net/cxgbe: not in enabled drivers build config 00:02:48.395 net/dpaa: not in enabled drivers build config 00:02:48.395 net/dpaa2: not in enabled drivers build config 00:02:48.395 net/e1000: not in enabled drivers build config 00:02:48.395 net/ena: not in enabled drivers build config 00:02:48.395 net/enetc: not in enabled drivers build config 00:02:48.395 net/enetfec: not in enabled drivers build config 00:02:48.395 net/enic: not in enabled drivers build config 00:02:48.395 net/failsafe: not in enabled drivers build config 00:02:48.395 net/fm10k: not in enabled drivers build config 00:02:48.395 net/gve: not in enabled drivers build config 00:02:48.395 net/hinic: not in enabled drivers build config 00:02:48.395 net/hns3: not in enabled drivers build config 00:02:48.395 net/i40e: not in enabled drivers build config 00:02:48.395 net/iavf: not in enabled drivers build config 00:02:48.395 net/ice: not in enabled drivers build config 00:02:48.395 net/idpf: not in enabled drivers build config 00:02:48.395 net/igc: not in enabled drivers build config 00:02:48.395 net/ionic: not in enabled drivers build config 00:02:48.395 net/ipn3ke: not in enabled drivers build config 00:02:48.395 net/ixgbe: not in enabled drivers build config 00:02:48.395 net/mana: not in enabled drivers build config 00:02:48.395 net/memif: not in enabled drivers build config 00:02:48.395 net/mlx4: not in enabled drivers build config 00:02:48.395 net/mlx5: not in enabled drivers build config 00:02:48.395 net/mvneta: not in enabled drivers build config 00:02:48.395 net/mvpp2: not in enabled drivers build config 00:02:48.395 net/netvsc: not in enabled drivers build config 00:02:48.395 net/nfb: not in enabled drivers build config 00:02:48.395 net/nfp: not in enabled drivers build config 00:02:48.395 net/ngbe: not in enabled drivers build config 00:02:48.395 net/null: not in enabled drivers build config 00:02:48.395 net/octeontx: not in enabled drivers build config 00:02:48.395 net/octeon_ep: not in enabled drivers build config 00:02:48.395 net/pcap: not in enabled drivers build config 00:02:48.395 net/pfe: not in enabled drivers build config 00:02:48.395 net/qede: not in enabled drivers build config 00:02:48.395 net/ring: not in enabled drivers build config 00:02:48.395 net/sfc: not in enabled drivers build config 00:02:48.395 net/softnic: not in enabled drivers build config 00:02:48.395 net/tap: not in enabled drivers build config 00:02:48.395 net/thunderx: not in enabled drivers build config 00:02:48.396 net/txgbe: not in enabled drivers 
build config 00:02:48.396 net/vdev_netvsc: not in enabled drivers build config 00:02:48.396 net/vhost: not in enabled drivers build config 00:02:48.396 net/virtio: not in enabled drivers build config 00:02:48.396 net/vmxnet3: not in enabled drivers build config 00:02:48.396 raw/*: missing internal dependency, "rawdev" 00:02:48.396 crypto/armv8: not in enabled drivers build config 00:02:48.396 crypto/bcmfs: not in enabled drivers build config 00:02:48.396 crypto/caam_jr: not in enabled drivers build config 00:02:48.396 crypto/ccp: not in enabled drivers build config 00:02:48.396 crypto/cnxk: not in enabled drivers build config 00:02:48.396 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.396 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.396 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.396 crypto/mlx5: not in enabled drivers build config 00:02:48.396 crypto/mvsam: not in enabled drivers build config 00:02:48.396 crypto/nitrox: not in enabled drivers build config 00:02:48.396 crypto/null: not in enabled drivers build config 00:02:48.396 crypto/octeontx: not in enabled drivers build config 00:02:48.396 crypto/openssl: not in enabled drivers build config 00:02:48.396 crypto/scheduler: not in enabled drivers build config 00:02:48.396 crypto/uadk: not in enabled drivers build config 00:02:48.396 crypto/virtio: not in enabled drivers build config 00:02:48.396 compress/isal: not in enabled drivers build config 00:02:48.396 compress/mlx5: not in enabled drivers build config 00:02:48.396 compress/octeontx: not in enabled drivers build config 00:02:48.396 compress/zlib: not in enabled drivers build config 00:02:48.396 regex/*: missing internal dependency, "regexdev" 00:02:48.396 ml/*: missing internal dependency, "mldev" 00:02:48.396 vdpa/ifc: not in enabled drivers build config 00:02:48.396 vdpa/mlx5: not in enabled drivers build config 00:02:48.396 vdpa/nfp: not in enabled drivers build config 00:02:48.396 vdpa/sfc: not in enabled drivers build config 00:02:48.396 event/*: missing internal dependency, "eventdev" 00:02:48.396 baseband/*: missing internal dependency, "bbdev" 00:02:48.396 gpu/*: missing internal dependency, "gpudev" 00:02:48.396 00:02:48.396 00:02:48.396 Build targets in project: 85 00:02:48.396 00:02:48.396 DPDK 23.11.0 00:02:48.396 00:02:48.396 User defined options 00:02:48.396 buildtype : debug 00:02:48.396 default_library : shared 00:02:48.396 libdir : lib 00:02:48.396 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:48.396 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:48.396 c_link_args : 00:02:48.396 cpu_instruction_set: native 00:02:48.396 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:48.396 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:48.396 enable_docs : false 00:02:48.396 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:48.396 enable_kmods : false 00:02:48.396 tests : false 00:02:48.396 00:02:48.396 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.396 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:48.396 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:48.396 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.396 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.396 [4/265] Linking static target lib/librte_kvargs.a 00:02:48.396 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.396 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:48.396 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:48.396 [8/265] Linking static target lib/librte_log.a 00:02:48.396 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:48.396 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.396 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.653 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.653 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.653 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.653 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.653 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.653 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.911 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.911 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.911 [20/265] Linking static target lib/librte_telemetry.a 00:02:48.911 [21/265] Linking target lib/librte_log.so.24.0 00:02:48.911 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.168 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.168 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:49.168 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:49.424 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.424 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.424 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:49.682 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.682 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.682 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.682 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.940 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.940 [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.940 [35/265] Linking target lib/librte_telemetry.so.24.0 00:02:50.198 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.198 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.198 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.198 [39/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:50.198 [40/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.198 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.199 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:50.199 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.456 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.726 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.726 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.726 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.726 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.993 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.993 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.993 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.251 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.251 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.251 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:51.251 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.251 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.510 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.768 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.768 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.768 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.769 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.769 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:52.027 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:52.027 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:52.027 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:52.027 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:52.284 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:52.284 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.542 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.542 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:52.799 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.799 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.799 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.799 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.799 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.799 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.799 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.799 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:53.057 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:53.315 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:53.315 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:53.574 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:53.574 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:53.574 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:53.574 [85/265] Linking static target lib/librte_eal.a 00:02:53.834 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:53.834 [87/265] Linking static target lib/librte_ring.a 00:02:53.834 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.834 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:54.092 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:54.092 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:54.092 [92/265] Linking static target lib/librte_rcu.a 00:02:54.350 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:54.350 [94/265] Linking static target lib/librte_mempool.a 00:02:54.350 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:54.350 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.609 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:54.609 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:54.609 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:54.868 [100/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.868 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.868 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:55.127 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:55.127 [104/265] Linking static target lib/librte_mbuf.a 00:02:55.127 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:55.127 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:55.127 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:55.386 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.386 [109/265] Linking static target lib/librte_meter.a 00:02:55.644 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.644 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:55.644 [112/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:55.644 [113/265] Linking static target lib/librte_net.a 00:02:55.644 [114/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.644 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.903 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.903 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:56.161 [118/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.161 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.420 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:56.420 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.678 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.678 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.937 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.937 [125/265] Linking static target lib/librte_pci.a 00:02:56.937 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.937 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:57.196 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:57.196 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.196 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.196 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.196 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:57.196 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.196 [134/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.455 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.455 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.455 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.455 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.455 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.455 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.455 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.455 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.455 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.713 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:57.713 [145/265] Linking static target lib/librte_ethdev.a 00:02:57.713 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.972 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.972 [148/265] Linking static target lib/librte_cmdline.a 00:02:57.972 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.230 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.230 [151/265] Linking static target lib/librte_timer.a 00:02:58.230 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.230 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.230 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.488 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.488 [156/265] Linking static target lib/librte_hash.a 00:02:58.746 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.746 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.746 [159/265] Linking static target lib/librte_compressdev.a 00:02:58.747 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.005 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 
00:02:59.005 [162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:59.005 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.005 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.263 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:59.521 [166/265] Linking static target lib/librte_dmadev.a 00:02:59.521 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.521 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.521 [169/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.521 [170/265] Linking static target lib/librte_cryptodev.a 00:02:59.521 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.779 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.779 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.779 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.779 [175/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.038 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.296 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.296 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.296 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.296 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.296 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.554 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:00.554 [183/265] Linking static target lib/librte_power.a 00:03:00.812 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.812 [185/265] Linking static target lib/librte_reorder.a 00:03:01.070 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.070 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.070 [188/265] Linking static target lib/librte_security.a 00:03:01.070 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.070 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.328 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.328 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.587 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.845 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.845 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.845 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.845 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.103 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.103 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:02.361 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 
00:03:02.361 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.361 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.361 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.619 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.619 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.619 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.619 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.619 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.888 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.888 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.888 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.888 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.888 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.888 [214/265] Linking static target drivers/librte_bus_vdev.a 00:03:02.888 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.888 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.888 [217/265] Linking static target drivers/librte_bus_pci.a 00:03:02.888 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.147 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.147 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.147 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.147 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.147 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.147 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:03.405 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.348 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:04.348 [227/265] Linking static target lib/librte_vhost.a 00:03:04.914 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.914 [229/265] Linking target lib/librte_eal.so.24.0 00:03:05.172 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:05.172 [231/265] Linking target lib/librte_meter.so.24.0 00:03:05.172 [232/265] Linking target lib/librte_pci.so.24.0 00:03:05.172 [233/265] Linking target lib/librte_timer.so.24.0 00:03:05.172 [234/265] Linking target lib/librte_dmadev.so.24.0 00:03:05.172 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:05.172 [236/265] Linking target lib/librte_ring.so.24.0 00:03:05.429 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:05.429 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:05.429 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:05.429 [240/265] Generating symbol 
file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:05.429 [241/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.429 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:05.429 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:05.429 [244/265] Linking target lib/librte_rcu.so.24.0 00:03:05.429 [245/265] Linking target lib/librte_mempool.so.24.0 00:03:05.686 [246/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.686 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:05.686 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:05.686 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:05.686 [250/265] Linking target lib/librte_mbuf.so.24.0 00:03:05.943 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:05.943 [252/265] Linking target lib/librte_compressdev.so.24.0 00:03:05.943 [253/265] Linking target lib/librte_reorder.so.24.0 00:03:05.943 [254/265] Linking target lib/librte_net.so.24.0 00:03:05.943 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:03:05.943 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:05.943 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:06.200 [258/265] Linking target lib/librte_cmdline.so.24.0 00:03:06.200 [259/265] Linking target lib/librte_hash.so.24.0 00:03:06.200 [260/265] Linking target lib/librte_security.so.24.0 00:03:06.200 [261/265] Linking target lib/librte_ethdev.so.24.0 00:03:06.200 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:06.200 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:06.458 [264/265] Linking target lib/librte_power.so.24.0 00:03:06.458 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:06.458 INFO: autodetecting backend as ninja 00:03:06.458 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:07.831 CC lib/log/log_flags.o 00:03:07.831 CC lib/log/log_deprecated.o 00:03:07.831 CC lib/log/log.o 00:03:07.831 CC lib/ut_mock/mock.o 00:03:07.831 CC lib/ut/ut.o 00:03:07.831 LIB libspdk_log.a 00:03:07.831 LIB libspdk_ut_mock.a 00:03:07.831 SO libspdk_log.so.6.1 00:03:07.831 SO libspdk_ut_mock.so.5.0 00:03:07.831 LIB libspdk_ut.a 00:03:07.831 SO libspdk_ut.so.1.0 00:03:07.831 SYMLINK libspdk_log.so 00:03:07.831 SYMLINK libspdk_ut_mock.so 00:03:07.831 SYMLINK libspdk_ut.so 00:03:08.088 CXX lib/trace_parser/trace.o 00:03:08.088 CC lib/dma/dma.o 00:03:08.088 CC lib/util/bit_array.o 00:03:08.088 CC lib/util/base64.o 00:03:08.088 CC lib/util/crc16.o 00:03:08.088 CC lib/util/crc32.o 00:03:08.088 CC lib/util/cpuset.o 00:03:08.088 CC lib/util/crc32c.o 00:03:08.088 CC lib/ioat/ioat.o 00:03:08.088 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.088 CC lib/vfio_user/host/vfio_user.o 00:03:08.088 CC lib/util/crc32_ieee.o 00:03:08.346 LIB libspdk_dma.a 00:03:08.346 CC lib/util/crc64.o 00:03:08.346 CC lib/util/dif.o 00:03:08.346 CC lib/util/fd.o 00:03:08.346 SO libspdk_dma.so.3.0 00:03:08.346 CC lib/util/file.o 00:03:08.346 CC lib/util/hexlify.o 00:03:08.346 SYMLINK libspdk_dma.so 00:03:08.346 CC lib/util/iov.o 00:03:08.346 CC lib/util/math.o 00:03:08.346 CC lib/util/pipe.o 
00:03:08.346 LIB libspdk_vfio_user.a 00:03:08.346 CC lib/util/strerror_tls.o 00:03:08.346 SO libspdk_vfio_user.so.4.0 00:03:08.346 LIB libspdk_ioat.a 00:03:08.346 SO libspdk_ioat.so.6.0 00:03:08.346 CC lib/util/string.o 00:03:08.346 CC lib/util/uuid.o 00:03:08.604 SYMLINK libspdk_vfio_user.so 00:03:08.604 CC lib/util/fd_group.o 00:03:08.604 CC lib/util/xor.o 00:03:08.604 SYMLINK libspdk_ioat.so 00:03:08.604 CC lib/util/zipf.o 00:03:08.862 LIB libspdk_util.a 00:03:08.862 SO libspdk_util.so.8.0 00:03:09.119 SYMLINK libspdk_util.so 00:03:09.119 CC lib/conf/conf.o 00:03:09.119 CC lib/env_dpdk/env.o 00:03:09.119 CC lib/env_dpdk/memory.o 00:03:09.119 CC lib/env_dpdk/pci.o 00:03:09.119 CC lib/idxd/idxd.o 00:03:09.119 CC lib/vmd/vmd.o 00:03:09.119 CC lib/env_dpdk/init.o 00:03:09.119 CC lib/json/json_parse.o 00:03:09.119 CC lib/rdma/common.o 00:03:09.119 LIB libspdk_trace_parser.a 00:03:09.119 SO libspdk_trace_parser.so.4.0 00:03:09.377 SYMLINK libspdk_trace_parser.so 00:03:09.377 CC lib/rdma/rdma_verbs.o 00:03:09.377 LIB libspdk_conf.a 00:03:09.377 CC lib/json/json_util.o 00:03:09.377 SO libspdk_conf.so.5.0 00:03:09.377 CC lib/json/json_write.o 00:03:09.377 SYMLINK libspdk_conf.so 00:03:09.377 CC lib/env_dpdk/threads.o 00:03:09.634 CC lib/vmd/led.o 00:03:09.634 LIB libspdk_rdma.a 00:03:09.634 CC lib/env_dpdk/pci_ioat.o 00:03:09.634 SO libspdk_rdma.so.5.0 00:03:09.634 CC lib/idxd/idxd_user.o 00:03:09.634 SYMLINK libspdk_rdma.so 00:03:09.634 CC lib/env_dpdk/pci_virtio.o 00:03:09.634 CC lib/env_dpdk/pci_vmd.o 00:03:09.634 CC lib/env_dpdk/pci_idxd.o 00:03:09.634 CC lib/env_dpdk/pci_event.o 00:03:09.634 CC lib/idxd/idxd_kernel.o 00:03:09.891 LIB libspdk_json.a 00:03:09.891 CC lib/env_dpdk/sigbus_handler.o 00:03:09.891 CC lib/env_dpdk/pci_dpdk.o 00:03:09.891 SO libspdk_json.so.5.1 00:03:09.891 LIB libspdk_vmd.a 00:03:09.891 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:09.891 SO libspdk_vmd.so.5.0 00:03:09.891 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.891 SYMLINK libspdk_json.so 00:03:09.891 LIB libspdk_idxd.a 00:03:09.891 SYMLINK libspdk_vmd.so 00:03:09.891 SO libspdk_idxd.so.11.0 00:03:09.891 SYMLINK libspdk_idxd.so 00:03:09.891 CC lib/jsonrpc/jsonrpc_server.o 00:03:09.891 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:09.891 CC lib/jsonrpc/jsonrpc_client.o 00:03:09.891 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.455 LIB libspdk_jsonrpc.a 00:03:10.455 SO libspdk_jsonrpc.so.5.1 00:03:10.455 SYMLINK libspdk_jsonrpc.so 00:03:10.455 CC lib/rpc/rpc.o 00:03:10.712 LIB libspdk_env_dpdk.a 00:03:10.712 LIB libspdk_rpc.a 00:03:10.712 SO libspdk_rpc.so.5.0 00:03:10.712 SO libspdk_env_dpdk.so.13.0 00:03:10.971 SYMLINK libspdk_rpc.so 00:03:10.971 SYMLINK libspdk_env_dpdk.so 00:03:10.971 CC lib/notify/notify.o 00:03:10.971 CC lib/notify/notify_rpc.o 00:03:10.971 CC lib/trace/trace.o 00:03:10.971 CC lib/sock/sock.o 00:03:10.971 CC lib/sock/sock_rpc.o 00:03:10.971 CC lib/trace/trace_flags.o 00:03:10.971 CC lib/trace/trace_rpc.o 00:03:11.229 LIB libspdk_notify.a 00:03:11.229 SO libspdk_notify.so.5.0 00:03:11.229 LIB libspdk_trace.a 00:03:11.229 SYMLINK libspdk_notify.so 00:03:11.229 SO libspdk_trace.so.9.0 00:03:11.488 SYMLINK libspdk_trace.so 00:03:11.488 LIB libspdk_sock.a 00:03:11.488 SO libspdk_sock.so.8.0 00:03:11.488 SYMLINK libspdk_sock.so 00:03:11.488 CC lib/thread/thread.o 00:03:11.488 CC lib/thread/iobuf.o 00:03:11.746 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:11.746 CC lib/nvme/nvme_ctrlr.o 00:03:11.746 CC lib/nvme/nvme_fabric.o 00:03:11.746 CC lib/nvme/nvme_ns_cmd.o 00:03:11.746 CC lib/nvme/nvme_ns.o 00:03:11.746 CC 
lib/nvme/nvme_pcie.o 00:03:11.746 CC lib/nvme/nvme_pcie_common.o 00:03:11.746 CC lib/nvme/nvme_qpair.o 00:03:11.746 CC lib/nvme/nvme.o 00:03:12.313 CC lib/nvme/nvme_quirks.o 00:03:12.313 CC lib/nvme/nvme_transport.o 00:03:12.570 CC lib/nvme/nvme_discovery.o 00:03:12.570 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:12.570 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:12.570 CC lib/nvme/nvme_tcp.o 00:03:12.828 CC lib/nvme/nvme_opal.o 00:03:12.828 CC lib/nvme/nvme_io_msg.o 00:03:13.086 CC lib/nvme/nvme_poll_group.o 00:03:13.087 LIB libspdk_thread.a 00:03:13.087 CC lib/nvme/nvme_zns.o 00:03:13.087 CC lib/nvme/nvme_cuse.o 00:03:13.087 SO libspdk_thread.so.9.0 00:03:13.345 CC lib/nvme/nvme_vfio_user.o 00:03:13.345 SYMLINK libspdk_thread.so 00:03:13.345 CC lib/nvme/nvme_rdma.o 00:03:13.345 CC lib/accel/accel.o 00:03:13.604 CC lib/blob/blobstore.o 00:03:13.604 CC lib/blob/request.o 00:03:13.604 CC lib/blob/zeroes.o 00:03:13.863 CC lib/blob/blob_bs_dev.o 00:03:13.863 CC lib/accel/accel_rpc.o 00:03:13.863 CC lib/accel/accel_sw.o 00:03:14.121 CC lib/init/json_config.o 00:03:14.121 CC lib/init/subsystem.o 00:03:14.121 CC lib/virtio/virtio.o 00:03:14.121 CC lib/init/subsystem_rpc.o 00:03:14.121 CC lib/virtio/virtio_vhost_user.o 00:03:14.121 CC lib/vfu_tgt/tgt_endpoint.o 00:03:14.121 CC lib/vfu_tgt/tgt_rpc.o 00:03:14.121 CC lib/init/rpc.o 00:03:14.380 CC lib/virtio/virtio_vfio_user.o 00:03:14.380 CC lib/virtio/virtio_pci.o 00:03:14.380 LIB libspdk_init.a 00:03:14.380 SO libspdk_init.so.4.0 00:03:14.380 LIB libspdk_accel.a 00:03:14.380 SO libspdk_accel.so.14.0 00:03:14.380 SYMLINK libspdk_init.so 00:03:14.638 LIB libspdk_vfu_tgt.a 00:03:14.638 SO libspdk_vfu_tgt.so.2.0 00:03:14.638 SYMLINK libspdk_accel.so 00:03:14.638 LIB libspdk_virtio.a 00:03:14.638 SYMLINK libspdk_vfu_tgt.so 00:03:14.638 CC lib/event/app.o 00:03:14.638 SO libspdk_virtio.so.6.0 00:03:14.638 CC lib/event/log_rpc.o 00:03:14.638 CC lib/event/reactor.o 00:03:14.638 CC lib/event/app_rpc.o 00:03:14.638 CC lib/event/scheduler_static.o 00:03:14.638 LIB libspdk_nvme.a 00:03:14.638 CC lib/bdev/bdev.o 00:03:14.638 CC lib/bdev/bdev_rpc.o 00:03:14.638 SYMLINK libspdk_virtio.so 00:03:14.638 CC lib/bdev/bdev_zone.o 00:03:14.897 CC lib/bdev/part.o 00:03:14.897 SO libspdk_nvme.so.12.0 00:03:14.897 CC lib/bdev/scsi_nvme.o 00:03:15.155 LIB libspdk_event.a 00:03:15.155 SO libspdk_event.so.12.0 00:03:15.155 SYMLINK libspdk_nvme.so 00:03:15.155 SYMLINK libspdk_event.so 00:03:16.543 LIB libspdk_blob.a 00:03:16.543 SO libspdk_blob.so.10.1 00:03:16.543 SYMLINK libspdk_blob.so 00:03:16.818 CC lib/lvol/lvol.o 00:03:16.818 CC lib/blobfs/blobfs.o 00:03:16.818 CC lib/blobfs/tree.o 00:03:17.381 LIB libspdk_bdev.a 00:03:17.639 SO libspdk_bdev.so.14.0 00:03:17.639 LIB libspdk_blobfs.a 00:03:17.639 SYMLINK libspdk_bdev.so 00:03:17.639 SO libspdk_blobfs.so.9.0 00:03:17.639 LIB libspdk_lvol.a 00:03:17.639 SO libspdk_lvol.so.9.1 00:03:17.639 SYMLINK libspdk_blobfs.so 00:03:17.639 CC lib/scsi/dev.o 00:03:17.639 CC lib/scsi/lun.o 00:03:17.639 CC lib/scsi/port.o 00:03:17.639 CC lib/scsi/scsi.o 00:03:17.639 CC lib/scsi/scsi_bdev.o 00:03:17.639 CC lib/nvmf/ctrlr.o 00:03:17.639 CC lib/ublk/ublk.o 00:03:17.639 CC lib/nbd/nbd.o 00:03:17.639 CC lib/ftl/ftl_core.o 00:03:17.897 SYMLINK libspdk_lvol.so 00:03:17.897 CC lib/ftl/ftl_init.o 00:03:17.897 CC lib/ftl/ftl_layout.o 00:03:17.897 CC lib/nvmf/ctrlr_discovery.o 00:03:17.897 CC lib/ublk/ublk_rpc.o 00:03:18.155 CC lib/nbd/nbd_rpc.o 00:03:18.155 CC lib/nvmf/ctrlr_bdev.o 00:03:18.155 CC lib/ftl/ftl_debug.o 00:03:18.155 CC lib/ftl/ftl_io.o 
00:03:18.155 CC lib/scsi/scsi_pr.o 00:03:18.155 LIB libspdk_nbd.a 00:03:18.155 SO libspdk_nbd.so.6.0 00:03:18.155 CC lib/nvmf/subsystem.o 00:03:18.155 CC lib/nvmf/nvmf.o 00:03:18.413 SYMLINK libspdk_nbd.so 00:03:18.413 CC lib/scsi/scsi_rpc.o 00:03:18.413 CC lib/scsi/task.o 00:03:18.413 LIB libspdk_ublk.a 00:03:18.413 CC lib/nvmf/nvmf_rpc.o 00:03:18.413 CC lib/ftl/ftl_sb.o 00:03:18.413 CC lib/ftl/ftl_l2p.o 00:03:18.413 SO libspdk_ublk.so.2.0 00:03:18.671 SYMLINK libspdk_ublk.so 00:03:18.671 CC lib/ftl/ftl_l2p_flat.o 00:03:18.671 CC lib/nvmf/transport.o 00:03:18.671 LIB libspdk_scsi.a 00:03:18.671 CC lib/ftl/ftl_nv_cache.o 00:03:18.671 SO libspdk_scsi.so.8.0 00:03:18.671 CC lib/ftl/ftl_band.o 00:03:18.671 CC lib/ftl/ftl_band_ops.o 00:03:18.671 SYMLINK libspdk_scsi.so 00:03:18.671 CC lib/ftl/ftl_writer.o 00:03:18.671 CC lib/ftl/ftl_rq.o 00:03:18.929 CC lib/ftl/ftl_reloc.o 00:03:18.929 CC lib/ftl/ftl_l2p_cache.o 00:03:19.187 CC lib/ftl/ftl_p2l.o 00:03:19.187 CC lib/nvmf/tcp.o 00:03:19.187 CC lib/iscsi/conn.o 00:03:19.187 CC lib/nvmf/vfio_user.o 00:03:19.187 CC lib/ftl/mngt/ftl_mngt.o 00:03:19.444 CC lib/iscsi/init_grp.o 00:03:19.444 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:19.444 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:19.444 CC lib/vhost/vhost.o 00:03:19.701 CC lib/vhost/vhost_rpc.o 00:03:19.701 CC lib/vhost/vhost_scsi.o 00:03:19.701 CC lib/nvmf/rdma.o 00:03:19.701 CC lib/iscsi/iscsi.o 00:03:19.701 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:19.701 CC lib/iscsi/md5.o 00:03:19.701 CC lib/iscsi/param.o 00:03:19.959 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:19.959 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.216 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:20.216 CC lib/vhost/vhost_blk.o 00:03:20.216 CC lib/iscsi/portal_grp.o 00:03:20.216 CC lib/iscsi/tgt_node.o 00:03:20.216 CC lib/vhost/rte_vhost_user.o 00:03:20.473 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:20.473 CC lib/iscsi/iscsi_subsystem.o 00:03:20.473 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:20.473 CC lib/iscsi/iscsi_rpc.o 00:03:20.732 CC lib/iscsi/task.o 00:03:20.732 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:20.732 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:20.732 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:20.989 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:20.989 CC lib/ftl/utils/ftl_conf.o 00:03:20.989 CC lib/ftl/utils/ftl_md.o 00:03:20.989 CC lib/ftl/utils/ftl_mempool.o 00:03:20.989 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.989 LIB libspdk_iscsi.a 00:03:20.989 CC lib/ftl/utils/ftl_property.o 00:03:20.989 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:21.246 SO libspdk_iscsi.so.7.0 00:03:21.246 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:21.246 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:21.246 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:21.246 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:21.246 SYMLINK libspdk_iscsi.so 00:03:21.246 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:21.246 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:21.246 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:21.246 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:21.505 LIB libspdk_vhost.a 00:03:21.505 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:21.505 CC lib/ftl/base/ftl_base_dev.o 00:03:21.505 CC lib/ftl/base/ftl_base_bdev.o 00:03:21.505 CC lib/ftl/ftl_trace.o 00:03:21.505 SO libspdk_vhost.so.7.1 00:03:21.505 SYMLINK libspdk_vhost.so 00:03:21.763 LIB libspdk_nvmf.a 00:03:21.763 LIB libspdk_ftl.a 00:03:21.763 SO libspdk_nvmf.so.17.0 00:03:22.023 SO libspdk_ftl.so.8.0 00:03:22.023 SYMLINK libspdk_nvmf.so 00:03:22.282 SYMLINK libspdk_ftl.so 00:03:22.282 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.282 CC module/vfu_device/vfu_virtio.o 
00:03:22.540 CC module/sock/posix/posix.o 00:03:22.540 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.540 CC module/blob/bdev/blob_bdev.o 00:03:22.540 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.540 CC module/sock/uring/uring.o 00:03:22.540 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.540 CC module/accel/error/accel_error.o 00:03:22.540 CC module/accel/ioat/accel_ioat.o 00:03:22.540 LIB libspdk_env_dpdk_rpc.a 00:03:22.540 SO libspdk_env_dpdk_rpc.so.5.0 00:03:22.540 LIB libspdk_scheduler_gscheduler.a 00:03:22.540 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.540 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.540 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.540 SO libspdk_scheduler_gscheduler.so.3.0 00:03:22.540 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:22.540 LIB libspdk_scheduler_dynamic.a 00:03:22.540 CC module/accel/error/accel_error_rpc.o 00:03:22.798 SO libspdk_scheduler_dynamic.so.3.0 00:03:22.798 SYMLINK libspdk_scheduler_gscheduler.so 00:03:22.798 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:22.798 CC module/vfu_device/vfu_virtio_blk.o 00:03:22.798 LIB libspdk_blob_bdev.a 00:03:22.798 SYMLINK libspdk_scheduler_dynamic.so 00:03:22.798 CC module/vfu_device/vfu_virtio_scsi.o 00:03:22.798 LIB libspdk_accel_ioat.a 00:03:22.798 SO libspdk_blob_bdev.so.10.1 00:03:22.798 CC module/accel/dsa/accel_dsa.o 00:03:22.798 CC module/accel/iaa/accel_iaa.o 00:03:22.798 LIB libspdk_accel_error.a 00:03:22.798 SO libspdk_accel_ioat.so.5.0 00:03:22.798 SO libspdk_accel_error.so.1.0 00:03:22.798 SYMLINK libspdk_blob_bdev.so 00:03:22.798 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.798 SYMLINK libspdk_accel_ioat.so 00:03:22.798 SYMLINK libspdk_accel_error.so 00:03:22.798 CC module/vfu_device/vfu_virtio_rpc.o 00:03:23.055 LIB libspdk_accel_iaa.a 00:03:23.055 CC module/accel/dsa/accel_dsa_rpc.o 00:03:23.055 CC module/bdev/delay/vbdev_delay.o 00:03:23.055 SO libspdk_accel_iaa.so.2.0 00:03:23.055 CC module/bdev/error/vbdev_error.o 00:03:23.055 CC module/bdev/gpt/gpt.o 00:03:23.055 SYMLINK libspdk_accel_iaa.so 00:03:23.055 LIB libspdk_vfu_device.a 00:03:23.055 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.055 LIB libspdk_sock_uring.a 00:03:23.055 SO libspdk_vfu_device.so.2.0 00:03:23.055 LIB libspdk_accel_dsa.a 00:03:23.055 SO libspdk_sock_uring.so.4.0 00:03:23.055 LIB libspdk_sock_posix.a 00:03:23.313 SO libspdk_accel_dsa.so.4.0 00:03:23.313 SO libspdk_sock_posix.so.5.0 00:03:23.313 CC module/bdev/null/bdev_null.o 00:03:23.313 CC module/bdev/malloc/bdev_malloc.o 00:03:23.313 SYMLINK libspdk_sock_uring.so 00:03:23.313 SYMLINK libspdk_vfu_device.so 00:03:23.313 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.313 SYMLINK libspdk_accel_dsa.so 00:03:23.313 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.313 SYMLINK libspdk_sock_posix.so 00:03:23.313 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.313 CC module/bdev/nvme/bdev_nvme.o 00:03:23.313 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.313 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:23.313 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:23.313 CC module/bdev/raid/bdev_raid.o 00:03:23.575 CC module/bdev/null/bdev_null_rpc.o 00:03:23.575 LIB libspdk_bdev_error.a 00:03:23.575 SO libspdk_bdev_error.so.5.0 00:03:23.575 LIB libspdk_bdev_gpt.a 00:03:23.575 LIB libspdk_bdev_malloc.a 00:03:23.575 LIB libspdk_bdev_delay.a 00:03:23.575 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.575 SO libspdk_bdev_gpt.so.5.0 00:03:23.576 SO libspdk_bdev_malloc.so.5.0 00:03:23.576 SO libspdk_bdev_delay.so.5.0 00:03:23.576 SYMLINK libspdk_bdev_error.so 
00:03:23.576 CC module/bdev/raid/bdev_raid_sb.o 00:03:23.576 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.576 LIB libspdk_bdev_null.a 00:03:23.576 SYMLINK libspdk_bdev_delay.so 00:03:23.576 CC module/bdev/raid/raid0.o 00:03:23.576 LIB libspdk_bdev_passthru.a 00:03:23.576 SYMLINK libspdk_bdev_gpt.so 00:03:23.576 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.576 SO libspdk_bdev_null.so.5.0 00:03:23.576 SYMLINK libspdk_bdev_malloc.so 00:03:23.835 CC module/bdev/nvme/nvme_rpc.o 00:03:23.835 SO libspdk_bdev_passthru.so.5.0 00:03:23.835 SYMLINK libspdk_bdev_null.so 00:03:23.835 SYMLINK libspdk_bdev_passthru.so 00:03:23.835 CC module/bdev/nvme/bdev_mdns_client.o 00:03:23.835 CC module/bdev/nvme/vbdev_opal.o 00:03:23.835 CC module/bdev/split/vbdev_split.o 00:03:23.835 CC module/bdev/raid/raid1.o 00:03:23.835 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:24.104 LIB libspdk_bdev_lvol.a 00:03:24.104 SO libspdk_bdev_lvol.so.5.0 00:03:24.104 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.104 CC module/bdev/uring/bdev_uring.o 00:03:24.104 SYMLINK libspdk_bdev_lvol.so 00:03:24.104 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:24.104 CC module/bdev/split/vbdev_split_rpc.o 00:03:24.104 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.364 CC module/bdev/uring/bdev_uring_rpc.o 00:03:24.364 LIB libspdk_bdev_split.a 00:03:24.364 LIB libspdk_bdev_zone_block.a 00:03:24.364 CC module/blobfs/bdev/blobfs_bdev.o 00:03:24.364 SO libspdk_bdev_split.so.5.0 00:03:24.364 CC module/bdev/raid/concat.o 00:03:24.364 SO libspdk_bdev_zone_block.so.5.0 00:03:24.364 CC module/bdev/aio/bdev_aio.o 00:03:24.364 SYMLINK libspdk_bdev_split.so 00:03:24.364 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.364 CC module/bdev/ftl/bdev_ftl.o 00:03:24.364 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:24.364 SYMLINK libspdk_bdev_zone_block.so 00:03:24.364 LIB libspdk_bdev_uring.a 00:03:24.364 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.364 SO libspdk_bdev_uring.so.5.0 00:03:24.625 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.625 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.625 SYMLINK libspdk_bdev_uring.so 00:03:24.625 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.625 LIB libspdk_bdev_raid.a 00:03:24.625 LIB libspdk_blobfs_bdev.a 00:03:24.625 SO libspdk_bdev_raid.so.5.0 00:03:24.625 SO libspdk_blobfs_bdev.so.5.0 00:03:24.625 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.625 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.625 SYMLINK libspdk_bdev_raid.so 00:03:24.625 SYMLINK libspdk_blobfs_bdev.so 00:03:24.625 LIB libspdk_bdev_aio.a 00:03:24.625 LIB libspdk_bdev_ftl.a 00:03:24.625 SO libspdk_bdev_aio.so.5.0 00:03:24.625 SO libspdk_bdev_ftl.so.5.0 00:03:24.887 SYMLINK libspdk_bdev_aio.so 00:03:24.887 SYMLINK libspdk_bdev_ftl.so 00:03:24.887 LIB libspdk_bdev_iscsi.a 00:03:24.887 SO libspdk_bdev_iscsi.so.5.0 00:03:24.887 SYMLINK libspdk_bdev_iscsi.so 00:03:25.146 LIB libspdk_bdev_virtio.a 00:03:25.146 SO libspdk_bdev_virtio.so.5.0 00:03:25.146 SYMLINK libspdk_bdev_virtio.so 00:03:25.404 LIB libspdk_bdev_nvme.a 00:03:25.673 SO libspdk_bdev_nvme.so.6.0 00:03:25.673 SYMLINK libspdk_bdev_nvme.so 00:03:25.931 CC module/event/subsystems/vmd/vmd.o 00:03:25.931 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:25.931 CC module/event/subsystems/sock/sock.o 00:03:25.931 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:25.931 CC module/event/subsystems/scheduler/scheduler.o 00:03:25.931 CC module/event/subsystems/iobuf/iobuf.o 00:03:25.931 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:25.931 CC 
module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:26.189 LIB libspdk_event_sock.a 00:03:26.189 LIB libspdk_event_vmd.a 00:03:26.189 LIB libspdk_event_scheduler.a 00:03:26.189 LIB libspdk_event_vfu_tgt.a 00:03:26.189 LIB libspdk_event_vhost_blk.a 00:03:26.189 SO libspdk_event_sock.so.4.0 00:03:26.189 SO libspdk_event_scheduler.so.3.0 00:03:26.189 SO libspdk_event_vfu_tgt.so.2.0 00:03:26.189 SO libspdk_event_vhost_blk.so.2.0 00:03:26.189 SO libspdk_event_vmd.so.5.0 00:03:26.189 LIB libspdk_event_iobuf.a 00:03:26.189 SYMLINK libspdk_event_sock.so 00:03:26.189 SO libspdk_event_iobuf.so.2.0 00:03:26.189 SYMLINK libspdk_event_scheduler.so 00:03:26.189 SYMLINK libspdk_event_vfu_tgt.so 00:03:26.189 SYMLINK libspdk_event_vhost_blk.so 00:03:26.190 SYMLINK libspdk_event_vmd.so 00:03:26.190 SYMLINK libspdk_event_iobuf.so 00:03:26.448 CC module/event/subsystems/accel/accel.o 00:03:26.448 LIB libspdk_event_accel.a 00:03:26.448 SO libspdk_event_accel.so.5.0 00:03:26.706 SYMLINK libspdk_event_accel.so 00:03:26.706 CC module/event/subsystems/bdev/bdev.o 00:03:26.965 LIB libspdk_event_bdev.a 00:03:26.965 SO libspdk_event_bdev.so.5.0 00:03:26.965 SYMLINK libspdk_event_bdev.so 00:03:27.223 CC module/event/subsystems/scsi/scsi.o 00:03:27.223 CC module/event/subsystems/ublk/ublk.o 00:03:27.223 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:27.223 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:27.223 CC module/event/subsystems/nbd/nbd.o 00:03:27.223 LIB libspdk_event_ublk.a 00:03:27.482 SO libspdk_event_ublk.so.2.0 00:03:27.482 LIB libspdk_event_nbd.a 00:03:27.482 LIB libspdk_event_scsi.a 00:03:27.482 SO libspdk_event_nbd.so.5.0 00:03:27.482 SO libspdk_event_scsi.so.5.0 00:03:27.482 SYMLINK libspdk_event_ublk.so 00:03:27.482 LIB libspdk_event_nvmf.a 00:03:27.482 SYMLINK libspdk_event_nbd.so 00:03:27.482 SO libspdk_event_nvmf.so.5.0 00:03:27.482 SYMLINK libspdk_event_scsi.so 00:03:27.482 SYMLINK libspdk_event_nvmf.so 00:03:27.741 CC module/event/subsystems/iscsi/iscsi.o 00:03:27.741 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:27.741 LIB libspdk_event_vhost_scsi.a 00:03:27.741 LIB libspdk_event_iscsi.a 00:03:27.741 SO libspdk_event_vhost_scsi.so.2.0 00:03:27.741 SO libspdk_event_iscsi.so.5.0 00:03:28.000 SYMLINK libspdk_event_vhost_scsi.so 00:03:28.000 SYMLINK libspdk_event_iscsi.so 00:03:28.000 SO libspdk.so.5.0 00:03:28.000 SYMLINK libspdk.so 00:03:28.259 TEST_HEADER include/spdk/accel.h 00:03:28.259 TEST_HEADER include/spdk/accel_module.h 00:03:28.259 TEST_HEADER include/spdk/assert.h 00:03:28.259 TEST_HEADER include/spdk/barrier.h 00:03:28.259 TEST_HEADER include/spdk/base64.h 00:03:28.259 CXX app/trace/trace.o 00:03:28.259 TEST_HEADER include/spdk/bdev.h 00:03:28.259 TEST_HEADER include/spdk/bdev_module.h 00:03:28.259 TEST_HEADER include/spdk/bdev_zone.h 00:03:28.259 TEST_HEADER include/spdk/bit_array.h 00:03:28.259 TEST_HEADER include/spdk/bit_pool.h 00:03:28.259 TEST_HEADER include/spdk/blob_bdev.h 00:03:28.259 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:28.259 TEST_HEADER include/spdk/blobfs.h 00:03:28.259 TEST_HEADER include/spdk/blob.h 00:03:28.259 TEST_HEADER include/spdk/conf.h 00:03:28.259 TEST_HEADER include/spdk/config.h 00:03:28.259 TEST_HEADER include/spdk/cpuset.h 00:03:28.259 TEST_HEADER include/spdk/crc16.h 00:03:28.259 TEST_HEADER include/spdk/crc32.h 00:03:28.259 TEST_HEADER include/spdk/crc64.h 00:03:28.259 TEST_HEADER include/spdk/dif.h 00:03:28.259 TEST_HEADER include/spdk/dma.h 00:03:28.259 TEST_HEADER include/spdk/endian.h 00:03:28.259 TEST_HEADER include/spdk/env_dpdk.h 
00:03:28.259 TEST_HEADER include/spdk/env.h 00:03:28.259 TEST_HEADER include/spdk/event.h 00:03:28.259 TEST_HEADER include/spdk/fd_group.h 00:03:28.259 CC examples/accel/perf/accel_perf.o 00:03:28.259 TEST_HEADER include/spdk/fd.h 00:03:28.259 TEST_HEADER include/spdk/file.h 00:03:28.259 TEST_HEADER include/spdk/ftl.h 00:03:28.259 CC examples/ioat/perf/perf.o 00:03:28.259 TEST_HEADER include/spdk/gpt_spec.h 00:03:28.259 TEST_HEADER include/spdk/hexlify.h 00:03:28.259 TEST_HEADER include/spdk/histogram_data.h 00:03:28.259 TEST_HEADER include/spdk/idxd.h 00:03:28.259 TEST_HEADER include/spdk/idxd_spec.h 00:03:28.259 TEST_HEADER include/spdk/init.h 00:03:28.259 TEST_HEADER include/spdk/ioat.h 00:03:28.259 TEST_HEADER include/spdk/ioat_spec.h 00:03:28.259 TEST_HEADER include/spdk/iscsi_spec.h 00:03:28.259 TEST_HEADER include/spdk/json.h 00:03:28.259 TEST_HEADER include/spdk/jsonrpc.h 00:03:28.259 TEST_HEADER include/spdk/likely.h 00:03:28.259 CC test/bdev/bdevio/bdevio.o 00:03:28.259 TEST_HEADER include/spdk/log.h 00:03:28.259 CC test/accel/dif/dif.o 00:03:28.259 CC examples/bdev/hello_world/hello_bdev.o 00:03:28.259 TEST_HEADER include/spdk/lvol.h 00:03:28.259 CC test/blobfs/mkfs/mkfs.o 00:03:28.259 TEST_HEADER include/spdk/memory.h 00:03:28.259 TEST_HEADER include/spdk/mmio.h 00:03:28.259 TEST_HEADER include/spdk/nbd.h 00:03:28.259 CC test/app/bdev_svc/bdev_svc.o 00:03:28.259 TEST_HEADER include/spdk/notify.h 00:03:28.259 TEST_HEADER include/spdk/nvme.h 00:03:28.259 CC examples/blob/hello_world/hello_blob.o 00:03:28.259 TEST_HEADER include/spdk/nvme_intel.h 00:03:28.259 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:28.259 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:28.259 TEST_HEADER include/spdk/nvme_spec.h 00:03:28.259 TEST_HEADER include/spdk/nvme_zns.h 00:03:28.259 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:28.259 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:28.259 TEST_HEADER include/spdk/nvmf.h 00:03:28.259 TEST_HEADER include/spdk/nvmf_spec.h 00:03:28.259 TEST_HEADER include/spdk/nvmf_transport.h 00:03:28.259 TEST_HEADER include/spdk/opal.h 00:03:28.259 TEST_HEADER include/spdk/opal_spec.h 00:03:28.259 TEST_HEADER include/spdk/pci_ids.h 00:03:28.259 TEST_HEADER include/spdk/pipe.h 00:03:28.259 TEST_HEADER include/spdk/queue.h 00:03:28.259 TEST_HEADER include/spdk/reduce.h 00:03:28.259 TEST_HEADER include/spdk/rpc.h 00:03:28.259 TEST_HEADER include/spdk/scheduler.h 00:03:28.259 TEST_HEADER include/spdk/scsi.h 00:03:28.259 TEST_HEADER include/spdk/scsi_spec.h 00:03:28.259 TEST_HEADER include/spdk/sock.h 00:03:28.259 TEST_HEADER include/spdk/stdinc.h 00:03:28.259 TEST_HEADER include/spdk/string.h 00:03:28.259 TEST_HEADER include/spdk/thread.h 00:03:28.259 TEST_HEADER include/spdk/trace.h 00:03:28.259 TEST_HEADER include/spdk/trace_parser.h 00:03:28.259 TEST_HEADER include/spdk/tree.h 00:03:28.259 TEST_HEADER include/spdk/ublk.h 00:03:28.259 TEST_HEADER include/spdk/util.h 00:03:28.259 TEST_HEADER include/spdk/uuid.h 00:03:28.259 TEST_HEADER include/spdk/version.h 00:03:28.259 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:28.259 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:28.259 TEST_HEADER include/spdk/vhost.h 00:03:28.518 TEST_HEADER include/spdk/vmd.h 00:03:28.518 TEST_HEADER include/spdk/xor.h 00:03:28.518 TEST_HEADER include/spdk/zipf.h 00:03:28.518 CXX test/cpp_headers/accel.o 00:03:28.518 LINK bdev_svc 00:03:28.518 LINK hello_bdev 00:03:28.518 LINK ioat_perf 00:03:28.518 LINK mkfs 00:03:28.518 CXX test/cpp_headers/accel_module.o 00:03:28.518 LINK hello_blob 00:03:28.518 
LINK spdk_trace 00:03:28.785 LINK dif 00:03:28.785 LINK bdevio 00:03:28.785 LINK accel_perf 00:03:28.785 CC examples/ioat/verify/verify.o 00:03:28.785 CXX test/cpp_headers/assert.o 00:03:28.785 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:28.785 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.785 CC app/trace_record/trace_record.o 00:03:28.785 CC test/dma/test_dma/test_dma.o 00:03:28.785 CC examples/blob/cli/blobcli.o 00:03:29.044 CXX test/cpp_headers/barrier.o 00:03:29.044 LINK verify 00:03:29.044 CC app/nvmf_tgt/nvmf_main.o 00:03:29.044 CC app/spdk_tgt/spdk_tgt.o 00:03:29.044 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.044 CXX test/cpp_headers/base64.o 00:03:29.044 LINK spdk_trace_record 00:03:29.044 CC app/spdk_lspci/spdk_lspci.o 00:03:29.303 LINK nvmf_tgt 00:03:29.303 LINK nvme_fuzz 00:03:29.303 CXX test/cpp_headers/bdev.o 00:03:29.303 LINK iscsi_tgt 00:03:29.303 LINK spdk_tgt 00:03:29.303 LINK test_dma 00:03:29.303 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.303 LINK spdk_lspci 00:03:29.562 CXX test/cpp_headers/bdev_module.o 00:03:29.562 CXX test/cpp_headers/bdev_zone.o 00:03:29.562 LINK blobcli 00:03:29.562 CXX test/cpp_headers/bit_array.o 00:03:29.562 CC app/spdk_nvme_identify/identify.o 00:03:29.562 CC app/spdk_nvme_perf/perf.o 00:03:29.562 CXX test/cpp_headers/bit_pool.o 00:03:29.562 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.562 LINK bdevperf 00:03:29.562 CXX test/cpp_headers/blob_bdev.o 00:03:29.822 CC app/spdk_top/spdk_top.o 00:03:29.822 CC app/spdk_dd/spdk_dd.o 00:03:29.822 CC app/vhost/vhost.o 00:03:29.822 LINK spdk_nvme_discover 00:03:29.822 CC app/fio/nvme/fio_plugin.o 00:03:29.822 CXX test/cpp_headers/blobfs_bdev.o 00:03:29.822 CC examples/nvme/hello_world/hello_world.o 00:03:30.081 LINK vhost 00:03:30.081 CC examples/nvme/reconnect/reconnect.o 00:03:30.081 CXX test/cpp_headers/blobfs.o 00:03:30.081 LINK spdk_dd 00:03:30.081 LINK hello_world 00:03:30.081 CXX test/cpp_headers/blob.o 00:03:30.339 LINK spdk_nvme_identify 00:03:30.339 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.339 CXX test/cpp_headers/conf.o 00:03:30.339 CXX test/cpp_headers/config.o 00:03:30.339 LINK spdk_nvme_perf 00:03:30.339 LINK spdk_nvme 00:03:30.339 LINK reconnect 00:03:30.339 CXX test/cpp_headers/cpuset.o 00:03:30.339 CC app/fio/bdev/fio_plugin.o 00:03:30.597 CC examples/sock/hello_world/hello_sock.o 00:03:30.597 CXX test/cpp_headers/crc16.o 00:03:30.597 CC examples/vmd/lsvmd/lsvmd.o 00:03:30.597 CC examples/vmd/led/led.o 00:03:30.597 LINK spdk_top 00:03:30.597 CC examples/nvme/arbitration/arbitration.o 00:03:30.597 CXX test/cpp_headers/crc32.o 00:03:30.597 LINK lsvmd 00:03:30.597 CC examples/nvme/hotplug/hotplug.o 00:03:30.597 CXX test/cpp_headers/crc64.o 00:03:30.597 LINK led 00:03:30.856 LINK hello_sock 00:03:30.856 LINK nvme_manage 00:03:30.856 CXX test/cpp_headers/dif.o 00:03:30.856 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.856 CC examples/nvme/abort/abort.o 00:03:30.856 CXX test/cpp_headers/dma.o 00:03:30.856 LINK iscsi_fuzz 00:03:30.856 LINK hotplug 00:03:30.856 LINK arbitration 00:03:30.856 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.856 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.856 LINK spdk_bdev 00:03:31.115 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:31.115 CXX test/cpp_headers/endian.o 00:03:31.115 CXX test/cpp_headers/env_dpdk.o 00:03:31.115 CXX test/cpp_headers/env.o 00:03:31.115 CXX test/cpp_headers/event.o 00:03:31.115 LINK cmb_copy 00:03:31.115 CXX test/cpp_headers/fd_group.o 00:03:31.115 LINK pmr_persistence 00:03:31.115 CXX 
test/cpp_headers/fd.o 00:03:31.375 CXX test/cpp_headers/file.o 00:03:31.375 CXX test/cpp_headers/ftl.o 00:03:31.375 CXX test/cpp_headers/gpt_spec.o 00:03:31.375 LINK abort 00:03:31.375 CC test/app/histogram_perf/histogram_perf.o 00:03:31.375 CXX test/cpp_headers/hexlify.o 00:03:31.375 CC examples/util/zipf/zipf.o 00:03:31.375 CC examples/nvmf/nvmf/nvmf.o 00:03:31.375 CC examples/thread/thread/thread_ex.o 00:03:31.375 LINK histogram_perf 00:03:31.375 LINK vhost_fuzz 00:03:31.375 CXX test/cpp_headers/histogram_data.o 00:03:31.375 CXX test/cpp_headers/idxd.o 00:03:31.634 CC test/app/stub/stub.o 00:03:31.634 CC test/app/jsoncat/jsoncat.o 00:03:31.634 LINK zipf 00:03:31.634 CC examples/idxd/perf/perf.o 00:03:31.634 CXX test/cpp_headers/idxd_spec.o 00:03:31.634 LINK jsoncat 00:03:31.634 LINK stub 00:03:31.634 LINK nvmf 00:03:31.634 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.634 LINK thread 00:03:31.634 CC test/event/reactor/reactor.o 00:03:31.634 CC test/event/event_perf/event_perf.o 00:03:31.893 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.893 CXX test/cpp_headers/init.o 00:03:31.893 CC test/event/reactor_perf/reactor_perf.o 00:03:31.893 LINK interrupt_tgt 00:03:31.893 LINK idxd_perf 00:03:31.893 LINK reactor 00:03:31.893 LINK event_perf 00:03:31.893 CC test/rpc_client/rpc_client_test.o 00:03:31.893 CXX test/cpp_headers/ioat.o 00:03:31.893 CC test/nvme/aer/aer.o 00:03:32.151 LINK reactor_perf 00:03:32.151 CC test/lvol/esnap/esnap.o 00:03:32.151 CC test/nvme/reset/reset.o 00:03:32.151 CC test/nvme/e2edp/nvme_dp.o 00:03:32.151 CC test/nvme/sgl/sgl.o 00:03:32.151 CC test/thread/poller_perf/poller_perf.o 00:03:32.151 CXX test/cpp_headers/ioat_spec.o 00:03:32.151 LINK rpc_client_test 00:03:32.151 CC test/event/app_repeat/app_repeat.o 00:03:32.409 LINK poller_perf 00:03:32.409 LINK aer 00:03:32.409 CXX test/cpp_headers/iscsi_spec.o 00:03:32.409 LINK reset 00:03:32.409 LINK app_repeat 00:03:32.409 LINK sgl 00:03:32.409 LINK nvme_dp 00:03:32.409 CC test/event/scheduler/scheduler.o 00:03:32.409 LINK mem_callbacks 00:03:32.409 CXX test/cpp_headers/json.o 00:03:32.409 CC test/nvme/overhead/overhead.o 00:03:32.668 CC test/nvme/err_injection/err_injection.o 00:03:32.668 CC test/env/vtophys/vtophys.o 00:03:32.668 CXX test/cpp_headers/jsonrpc.o 00:03:32.668 CC test/nvme/reserve/reserve.o 00:03:32.668 CC test/nvme/startup/startup.o 00:03:32.668 CC test/nvme/simple_copy/simple_copy.o 00:03:32.668 CC test/nvme/connect_stress/connect_stress.o 00:03:32.668 LINK scheduler 00:03:32.668 LINK err_injection 00:03:32.668 LINK vtophys 00:03:32.668 CXX test/cpp_headers/likely.o 00:03:32.926 LINK startup 00:03:32.926 LINK connect_stress 00:03:32.926 LINK reserve 00:03:32.926 LINK overhead 00:03:32.926 CXX test/cpp_headers/log.o 00:03:32.926 LINK simple_copy 00:03:32.927 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:32.927 CC test/nvme/boot_partition/boot_partition.o 00:03:32.927 CXX test/cpp_headers/lvol.o 00:03:32.927 CXX test/cpp_headers/memory.o 00:03:32.927 CC test/env/memory/memory_ut.o 00:03:32.927 CC test/nvme/compliance/nvme_compliance.o 00:03:32.927 CC test/nvme/fused_ordering/fused_ordering.o 00:03:32.927 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.186 CC test/nvme/fdp/fdp.o 00:03:33.186 LINK env_dpdk_post_init 00:03:33.186 LINK boot_partition 00:03:33.186 CXX test/cpp_headers/mmio.o 00:03:33.186 CC test/nvme/cuse/cuse.o 00:03:33.186 LINK fused_ordering 00:03:33.186 LINK doorbell_aers 00:03:33.186 CXX test/cpp_headers/nbd.o 00:03:33.445 CC test/env/pci/pci_ut.o 00:03:33.445 CXX 
test/cpp_headers/notify.o 00:03:33.445 LINK nvme_compliance 00:03:33.445 CXX test/cpp_headers/nvme.o 00:03:33.445 CXX test/cpp_headers/nvme_intel.o 00:03:33.445 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.445 LINK fdp 00:03:33.445 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.445 CXX test/cpp_headers/nvme_spec.o 00:03:33.445 CXX test/cpp_headers/nvme_zns.o 00:03:33.445 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.445 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.712 CXX test/cpp_headers/nvmf.o 00:03:33.712 CXX test/cpp_headers/nvmf_spec.o 00:03:33.712 CXX test/cpp_headers/nvmf_transport.o 00:03:33.712 LINK pci_ut 00:03:33.712 CXX test/cpp_headers/opal.o 00:03:33.712 CXX test/cpp_headers/opal_spec.o 00:03:33.712 CXX test/cpp_headers/pci_ids.o 00:03:33.712 CXX test/cpp_headers/pipe.o 00:03:33.982 CXX test/cpp_headers/queue.o 00:03:33.982 CXX test/cpp_headers/reduce.o 00:03:33.982 CXX test/cpp_headers/rpc.o 00:03:33.982 LINK memory_ut 00:03:33.982 CXX test/cpp_headers/scheduler.o 00:03:33.982 CXX test/cpp_headers/scsi.o 00:03:33.982 CXX test/cpp_headers/scsi_spec.o 00:03:33.982 CXX test/cpp_headers/sock.o 00:03:33.982 CXX test/cpp_headers/stdinc.o 00:03:33.982 CXX test/cpp_headers/string.o 00:03:33.982 CXX test/cpp_headers/thread.o 00:03:33.982 CXX test/cpp_headers/trace.o 00:03:33.982 CXX test/cpp_headers/trace_parser.o 00:03:33.982 CXX test/cpp_headers/tree.o 00:03:34.241 CXX test/cpp_headers/ublk.o 00:03:34.241 CXX test/cpp_headers/util.o 00:03:34.241 CXX test/cpp_headers/uuid.o 00:03:34.241 CXX test/cpp_headers/version.o 00:03:34.241 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.241 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.241 CXX test/cpp_headers/vhost.o 00:03:34.241 LINK cuse 00:03:34.241 CXX test/cpp_headers/vmd.o 00:03:34.241 CXX test/cpp_headers/xor.o 00:03:34.241 CXX test/cpp_headers/zipf.o 00:03:36.774 LINK esnap 00:03:36.774 ************************************ 00:03:36.774 END TEST make 00:03:36.774 ************************************ 00:03:36.774 00:03:36.774 real 1m1.651s 00:03:36.774 user 6m32.473s 00:03:36.774 sys 1m22.286s 00:03:36.774 04:18:39 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:36.774 04:18:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:37.033 04:18:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:37.033 04:18:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:37.033 04:18:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:37.033 04:18:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:37.033 04:18:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:37.033 04:18:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:37.033 04:18:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:37.033 04:18:40 -- scripts/common.sh@335 -- # IFS=.-: 00:03:37.033 04:18:40 -- scripts/common.sh@335 -- # read -ra ver1 00:03:37.033 04:18:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.033 04:18:40 -- scripts/common.sh@336 -- # read -ra ver2 00:03:37.033 04:18:40 -- scripts/common.sh@337 -- # local 'op=<' 00:03:37.033 04:18:40 -- scripts/common.sh@339 -- # ver1_l=2 00:03:37.033 04:18:40 -- scripts/common.sh@340 -- # ver2_l=1 00:03:37.033 04:18:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:37.033 04:18:40 -- scripts/common.sh@343 -- # case "$op" in 00:03:37.033 04:18:40 -- scripts/common.sh@344 -- # : 1 00:03:37.033 04:18:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:37.033 04:18:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.033 04:18:40 -- scripts/common.sh@364 -- # decimal 1 00:03:37.033 04:18:40 -- scripts/common.sh@352 -- # local d=1 00:03:37.033 04:18:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.033 04:18:40 -- scripts/common.sh@354 -- # echo 1 00:03:37.033 04:18:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:37.033 04:18:40 -- scripts/common.sh@365 -- # decimal 2 00:03:37.033 04:18:40 -- scripts/common.sh@352 -- # local d=2 00:03:37.033 04:18:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.033 04:18:40 -- scripts/common.sh@354 -- # echo 2 00:03:37.033 04:18:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:37.033 04:18:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:37.033 04:18:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:37.033 04:18:40 -- scripts/common.sh@367 -- # return 0 00:03:37.033 04:18:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.033 04:18:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:37.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.033 --rc genhtml_branch_coverage=1 00:03:37.033 --rc genhtml_function_coverage=1 00:03:37.033 --rc genhtml_legend=1 00:03:37.033 --rc geninfo_all_blocks=1 00:03:37.033 --rc geninfo_unexecuted_blocks=1 00:03:37.033 00:03:37.033 ' 00:03:37.033 04:18:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:37.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.033 --rc genhtml_branch_coverage=1 00:03:37.033 --rc genhtml_function_coverage=1 00:03:37.033 --rc genhtml_legend=1 00:03:37.033 --rc geninfo_all_blocks=1 00:03:37.033 --rc geninfo_unexecuted_blocks=1 00:03:37.033 00:03:37.033 ' 00:03:37.033 04:18:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:37.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.033 --rc genhtml_branch_coverage=1 00:03:37.033 --rc genhtml_function_coverage=1 00:03:37.033 --rc genhtml_legend=1 00:03:37.033 --rc geninfo_all_blocks=1 00:03:37.033 --rc geninfo_unexecuted_blocks=1 00:03:37.033 00:03:37.033 ' 00:03:37.033 04:18:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:37.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.033 --rc genhtml_branch_coverage=1 00:03:37.033 --rc genhtml_function_coverage=1 00:03:37.033 --rc genhtml_legend=1 00:03:37.033 --rc geninfo_all_blocks=1 00:03:37.033 --rc geninfo_unexecuted_blocks=1 00:03:37.033 00:03:37.033 ' 00:03:37.033 04:18:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:37.033 04:18:40 -- nvmf/common.sh@7 -- # uname -s 00:03:37.034 04:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.034 04:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.034 04:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.034 04:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.034 04:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.034 04:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.034 04:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.034 04:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.034 04:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.034 04:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.034 04:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:03:37.034 
04:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:03:37.034 04:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.034 04:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.034 04:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:37.034 04:18:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:37.034 04:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.034 04:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.034 04:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.034 04:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.034 04:18:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.034 04:18:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.034 04:18:40 -- paths/export.sh@5 -- # export PATH 00:03:37.034 04:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.034 04:18:40 -- nvmf/common.sh@46 -- # : 0 00:03:37.034 04:18:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:37.034 04:18:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:37.034 04:18:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:37.034 04:18:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.034 04:18:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.034 04:18:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:37.034 04:18:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:37.034 04:18:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:37.034 04:18:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.034 04:18:40 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.034 04:18:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.034 04:18:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.034 04:18:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.034 04:18:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.034 04:18:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.034 04:18:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.034 04:18:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.034 04:18:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.034 04:18:40 -- spdk/autotest.sh@48 -- # 
udevadm_pid=48014 00:03:37.034 04:18:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.034 04:18:40 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:37.292 04:18:40 -- spdk/autotest.sh@54 -- # echo 48032 00:03:37.292 04:18:40 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:37.292 04:18:40 -- spdk/autotest.sh@56 -- # echo 48035 00:03:37.292 04:18:40 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:37.292 04:18:40 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:37.292 04:18:40 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:37.292 04:18:40 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:37.292 04:18:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:37.292 04:18:40 -- common/autotest_common.sh@10 -- # set +x 00:03:37.292 04:18:40 -- spdk/autotest.sh@70 -- # create_test_list 00:03:37.292 04:18:40 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:37.292 04:18:40 -- common/autotest_common.sh@10 -- # set +x 00:03:37.292 04:18:40 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:37.292 04:18:40 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:37.292 04:18:40 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:37.292 04:18:40 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:37.292 04:18:40 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:37.292 04:18:40 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:37.292 04:18:40 -- common/autotest_common.sh@1450 -- # uname 00:03:37.292 04:18:40 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:37.292 04:18:40 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:37.292 04:18:40 -- common/autotest_common.sh@1470 -- # uname 00:03:37.292 04:18:40 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:37.292 04:18:40 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:37.292 04:18:40 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:37.292 lcov: LCOV version 1.15 00:03:37.292 04:18:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.404 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:45.404 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:45.404 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:45.404 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:45.405 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:45.405 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:03.503 04:19:06 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:03.503 04:19:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.503 04:19:06 -- common/autotest_common.sh@10 -- # set +x 00:04:03.503 04:19:06 -- spdk/autotest.sh@89 -- # rm -f 00:04:03.503 04:19:06 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.070 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:04.070 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:04.070 04:19:07 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:04.070 04:19:07 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:04.070 04:19:07 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:04.070 04:19:07 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:04.070 04:19:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:04.070 04:19:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:04.070 04:19:07 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:04.070 04:19:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.070 04:19:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:04.071 04:19:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:04.071 04:19:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:04.071 04:19:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:04.071 04:19:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:04.071 04:19:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:04.071 04:19:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:04.071 04:19:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:04.071 04:19:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:04.071 04:19:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:04.071 04:19:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:04.071 04:19:07 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:04.071 04:19:07 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:04.071 04:19:07 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:04.071 04:19:07 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:04.071 04:19:07 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:04.071 04:19:07 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:04.330 04:19:07 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:04.330 04:19:07 -- spdk/autotest.sh@108 -- # grep -v p 00:04:04.330 04:19:07 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:04.330 04:19:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:04.330 04:19:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:04.330 04:19:07 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:04.330 04:19:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:04.330 No valid GPT data, bailing 00:04:04.330 04:19:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:04:04.330 04:19:07 -- scripts/common.sh@393 -- # pt= 00:04:04.330 04:19:07 -- scripts/common.sh@394 -- # return 1 00:04:04.330 04:19:07 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:04.330 1+0 records in 00:04:04.330 1+0 records out 00:04:04.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551016 s, 190 MB/s 00:04:04.330 04:19:07 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:04.330 04:19:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:04.330 04:19:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:04.330 04:19:07 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:04.330 04:19:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:04.330 No valid GPT data, bailing 00:04:04.330 04:19:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:04.330 04:19:07 -- scripts/common.sh@393 -- # pt= 00:04:04.330 04:19:07 -- scripts/common.sh@394 -- # return 1 00:04:04.330 04:19:07 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:04.330 1+0 records in 00:04:04.330 1+0 records out 00:04:04.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00310698 s, 337 MB/s 00:04:04.330 04:19:07 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:04.330 04:19:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:04.330 04:19:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:04.330 04:19:07 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:04.330 04:19:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:04.330 No valid GPT data, bailing 00:04:04.330 04:19:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:04.330 04:19:07 -- scripts/common.sh@393 -- # pt= 00:04:04.330 04:19:07 -- scripts/common.sh@394 -- # return 1 00:04:04.330 04:19:07 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:04.330 1+0 records in 00:04:04.330 1+0 records out 00:04:04.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441433 s, 238 MB/s 00:04:04.330 04:19:07 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:04.330 04:19:07 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:04.330 04:19:07 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:04.330 04:19:07 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:04.330 04:19:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:04.589 No valid GPT data, bailing 00:04:04.589 04:19:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:04.589 04:19:07 -- scripts/common.sh@393 -- # pt= 00:04:04.589 04:19:07 -- scripts/common.sh@394 -- # return 1 00:04:04.589 04:19:07 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:04.589 1+0 records in 00:04:04.589 1+0 records out 00:04:04.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444359 s, 236 MB/s 00:04:04.589 04:19:07 -- spdk/autotest.sh@116 -- # sync 00:04:04.848 04:19:07 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:04.848 04:19:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:04.849 04:19:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.754 04:19:09 -- spdk/autotest.sh@122 -- # uname -s 00:04:06.754 04:19:09 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
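The xtrace block just above (spdk/autotest.sh@108-112 together with block_in_use from scripts/common.sh) is the pre-test device wipe: every /dev/nvme*n* namespace that is not zoned and reports no partition table ("No valid GPT data, bailing", empty PTTYPE) has its first MiB zeroed so the setup tests that follow start from clean disks. Below is a minimal bash sketch of that flow, reconstructed only from the trace shown here; block_in_use is simplified (the spdk-gpt.py probe is folded into a plain blkid check) and zoned_devs stands in for the array get_zoned_devs filled earlier, so treat it as illustrative rather than SPDK's literal source.

  #!/usr/bin/env bash
  # Sketch of the device-prep loop seen in the trace above; illustrative only.
  declare -A zoned_devs=()   # filled by get_zoned_devs in the real run; empty here

  block_in_use() {           # simplified stand-in for the scripts/common.sh helper
      local block=$1 pt
      pt=$(blkid -s PTTYPE -o value "$block" || true)
      [[ -n $pt ]]           # succeed ("in use") only if a partition table is reported
  }

  for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # namespaces, partitions filtered out
      [[ -z ${zoned_devs[${dev##*/}]:-} ]] || continue    # skip zoned namespaces
      if ! block_in_use "$dev"; then
          dd if=/dev/zero of="$dev" bs=1M count=1          # wipe the first MiB, as logged above
      fi
  done

In this run all four namespaces (nvme0n1, nvme1n1-n3) take the wipe path, which is why the log shows four "1+0 records in / 1+0 records out" dd results before the sync.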
00:04:06.754 04:19:09 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:06.754 04:19:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.754 04:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 ************************************ 00:04:06.754 START TEST setup.sh 00:04:06.754 ************************************ 00:04:06.754 04:19:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:06.754 * Looking for test storage... 00:04:06.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:06.754 04:19:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:06.754 04:19:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:06.754 04:19:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:06.754 04:19:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:06.754 04:19:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:06.754 04:19:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:06.754 04:19:09 -- scripts/common.sh@335 -- # IFS=.-: 00:04:06.754 04:19:09 -- scripts/common.sh@335 -- # read -ra ver1 00:04:06.754 04:19:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.754 04:19:09 -- scripts/common.sh@336 -- # read -ra ver2 00:04:06.754 04:19:09 -- scripts/common.sh@337 -- # local 'op=<' 00:04:06.754 04:19:09 -- scripts/common.sh@339 -- # ver1_l=2 00:04:06.754 04:19:09 -- scripts/common.sh@340 -- # ver2_l=1 00:04:06.754 04:19:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:06.754 04:19:09 -- scripts/common.sh@343 -- # case "$op" in 00:04:06.754 04:19:09 -- scripts/common.sh@344 -- # : 1 00:04:06.754 04:19:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:06.754 04:19:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.754 04:19:09 -- scripts/common.sh@364 -- # decimal 1 00:04:06.754 04:19:09 -- scripts/common.sh@352 -- # local d=1 00:04:06.754 04:19:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.754 04:19:09 -- scripts/common.sh@354 -- # echo 1 00:04:06.754 04:19:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:06.754 04:19:09 -- scripts/common.sh@365 -- # decimal 2 00:04:06.754 04:19:09 -- scripts/common.sh@352 -- # local d=2 00:04:06.754 04:19:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.754 04:19:09 -- scripts/common.sh@354 -- # echo 2 00:04:06.754 04:19:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:06.754 04:19:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:06.754 04:19:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:06.754 04:19:09 -- scripts/common.sh@367 -- # return 0 00:04:06.754 04:19:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:06.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.754 --rc genhtml_branch_coverage=1 00:04:06.754 --rc genhtml_function_coverage=1 00:04:06.754 --rc genhtml_legend=1 00:04:06.754 --rc geninfo_all_blocks=1 00:04:06.754 --rc geninfo_unexecuted_blocks=1 00:04:06.754 00:04:06.754 ' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:06.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.754 --rc genhtml_branch_coverage=1 00:04:06.754 --rc genhtml_function_coverage=1 00:04:06.754 --rc genhtml_legend=1 00:04:06.754 --rc geninfo_all_blocks=1 00:04:06.754 --rc geninfo_unexecuted_blocks=1 00:04:06.754 00:04:06.754 ' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:06.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.754 --rc genhtml_branch_coverage=1 00:04:06.754 --rc genhtml_function_coverage=1 00:04:06.754 --rc genhtml_legend=1 00:04:06.754 --rc geninfo_all_blocks=1 00:04:06.754 --rc geninfo_unexecuted_blocks=1 00:04:06.754 00:04:06.754 ' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:06.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.754 --rc genhtml_branch_coverage=1 00:04:06.754 --rc genhtml_function_coverage=1 00:04:06.754 --rc genhtml_legend=1 00:04:06.754 --rc geninfo_all_blocks=1 00:04:06.754 --rc geninfo_unexecuted_blocks=1 00:04:06.754 00:04:06.754 ' 00:04:06.754 04:19:09 -- setup/test-setup.sh@10 -- # uname -s 00:04:06.754 04:19:09 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:06.754 04:19:09 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:06.754 04:19:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.754 04:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.754 04:19:09 -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 ************************************ 00:04:06.754 START TEST acl 00:04:06.754 ************************************ 00:04:06.754 04:19:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:06.754 * Looking for test storage... 
00:04:06.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:06.754 04:19:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:06.755 04:19:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:06.755 04:19:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.014 04:19:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.014 04:19:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.014 04:19:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.014 04:19:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.014 04:19:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.014 04:19:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.014 04:19:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.014 04:19:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.014 04:19:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.014 04:19:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.014 04:19:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.014 04:19:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.014 04:19:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.014 04:19:10 -- scripts/common.sh@344 -- # : 1 00:04:07.014 04:19:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.014 04:19:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.014 04:19:10 -- scripts/common.sh@364 -- # decimal 1 00:04:07.014 04:19:10 -- scripts/common.sh@352 -- # local d=1 00:04:07.014 04:19:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.014 04:19:10 -- scripts/common.sh@354 -- # echo 1 00:04:07.014 04:19:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.014 04:19:10 -- scripts/common.sh@365 -- # decimal 2 00:04:07.014 04:19:10 -- scripts/common.sh@352 -- # local d=2 00:04:07.014 04:19:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.014 04:19:10 -- scripts/common.sh@354 -- # echo 2 00:04:07.014 04:19:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.014 04:19:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.014 04:19:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.014 04:19:10 -- scripts/common.sh@367 -- # return 0 00:04:07.014 04:19:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.014 04:19:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.014 --rc genhtml_branch_coverage=1 00:04:07.014 --rc genhtml_function_coverage=1 00:04:07.014 --rc genhtml_legend=1 00:04:07.014 --rc geninfo_all_blocks=1 00:04:07.014 --rc geninfo_unexecuted_blocks=1 00:04:07.014 00:04:07.014 ' 00:04:07.014 04:19:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.014 --rc genhtml_branch_coverage=1 00:04:07.014 --rc genhtml_function_coverage=1 00:04:07.014 --rc genhtml_legend=1 00:04:07.014 --rc geninfo_all_blocks=1 00:04:07.014 --rc geninfo_unexecuted_blocks=1 00:04:07.014 00:04:07.014 ' 00:04:07.014 04:19:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.014 --rc genhtml_branch_coverage=1 00:04:07.014 --rc genhtml_function_coverage=1 00:04:07.014 --rc genhtml_legend=1 00:04:07.014 --rc geninfo_all_blocks=1 00:04:07.014 --rc geninfo_unexecuted_blocks=1 00:04:07.014 00:04:07.014 ' 00:04:07.014 04:19:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.014 --rc genhtml_branch_coverage=1 00:04:07.014 --rc genhtml_function_coverage=1 00:04:07.014 --rc genhtml_legend=1 00:04:07.014 --rc geninfo_all_blocks=1 00:04:07.014 --rc geninfo_unexecuted_blocks=1 00:04:07.014 00:04:07.014 ' 00:04:07.014 04:19:10 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:07.014 04:19:10 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:07.014 04:19:10 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:07.014 04:19:10 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:07.014 04:19:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.014 04:19:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:07.014 04:19:10 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:07.014 04:19:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.014 04:19:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:07.014 04:19:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:07.014 04:19:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.014 04:19:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:07.014 04:19:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:07.014 04:19:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.014 04:19:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:07.014 04:19:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:07.014 04:19:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:07.014 04:19:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.014 04:19:10 -- setup/acl.sh@12 -- # devs=() 00:04:07.014 04:19:10 -- setup/acl.sh@12 -- # declare -a devs 00:04:07.014 04:19:10 -- setup/acl.sh@13 -- # drivers=() 00:04:07.014 04:19:10 -- setup/acl.sh@13 -- # declare -A drivers 00:04:07.014 04:19:10 -- setup/acl.sh@51 -- # setup reset 00:04:07.014 04:19:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.014 04:19:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.582 04:19:10 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:07.582 04:19:10 -- setup/acl.sh@16 -- # local dev driver 00:04:07.582 04:19:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.582 04:19:10 -- setup/acl.sh@15 -- # setup output status 00:04:07.582 04:19:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.582 04:19:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:07.842 Hugepages 00:04:07.842 node hugesize free / total 00:04:07.842 04:19:10 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:07.842 04:19:10 -- setup/acl.sh@19 -- # continue 00:04:07.842 04:19:10 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:07.842 00:04:07.842 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.842 04:19:10 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:07.842 04:19:10 -- setup/acl.sh@19 -- # continue 00:04:07.842 04:19:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.842 04:19:11 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:07.842 04:19:11 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:07.842 04:19:11 -- setup/acl.sh@20 -- # continue 00:04:07.842 04:19:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.102 04:19:11 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:08.102 04:19:11 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:08.102 04:19:11 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:08.102 04:19:11 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:08.102 04:19:11 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:08.102 04:19:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.102 04:19:11 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:08.102 04:19:11 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:08.102 04:19:11 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:08.102 04:19:11 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:08.102 04:19:11 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:08.102 04:19:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.102 04:19:11 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:08.102 04:19:11 -- setup/acl.sh@54 -- # run_test denied denied 00:04:08.102 04:19:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.102 04:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.102 04:19:11 -- common/autotest_common.sh@10 -- # set +x 00:04:08.102 ************************************ 00:04:08.102 START TEST denied 00:04:08.102 ************************************ 00:04:08.102 04:19:11 -- common/autotest_common.sh@1114 -- # denied 00:04:08.102 04:19:11 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:08.102 04:19:11 -- setup/acl.sh@38 -- # setup output config 00:04:08.102 04:19:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.102 04:19:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.102 04:19:11 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:09.059 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:09.059 04:19:12 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:09.059 04:19:12 -- setup/acl.sh@28 -- # local dev driver 00:04:09.059 04:19:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:09.059 04:19:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:09.059 04:19:12 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:09.059 04:19:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:09.059 04:19:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:09.059 04:19:12 -- setup/acl.sh@41 -- # setup reset 00:04:09.059 04:19:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.059 04:19:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.626 ************************************ 00:04:09.627 END TEST denied 00:04:09.627 ************************************ 00:04:09.627 00:04:09.627 real 0m1.454s 00:04:09.627 user 0m0.611s 00:04:09.627 sys 0m0.797s 00:04:09.627 04:19:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.627 04:19:12 -- 
common/autotest_common.sh@10 -- # set +x 00:04:09.627 04:19:12 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:09.627 04:19:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.627 04:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.627 04:19:12 -- common/autotest_common.sh@10 -- # set +x 00:04:09.627 ************************************ 00:04:09.627 START TEST allowed 00:04:09.627 ************************************ 00:04:09.627 04:19:12 -- common/autotest_common.sh@1114 -- # allowed 00:04:09.627 04:19:12 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:09.627 04:19:12 -- setup/acl.sh@45 -- # setup output config 00:04:09.627 04:19:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.627 04:19:12 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:09.627 04:19:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.564 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.564 04:19:13 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:10.564 04:19:13 -- setup/acl.sh@28 -- # local dev driver 00:04:10.564 04:19:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:10.564 04:19:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:10.564 04:19:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:10.564 04:19:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:10.564 04:19:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:10.564 04:19:13 -- setup/acl.sh@48 -- # setup reset 00:04:10.564 04:19:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.564 04:19:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.132 00:04:11.132 real 0m1.502s 00:04:11.132 user 0m0.660s 00:04:11.132 sys 0m0.835s 00:04:11.132 04:19:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.132 ************************************ 00:04:11.132 END TEST allowed 00:04:11.132 ************************************ 00:04:11.132 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:11.132 ************************************ 00:04:11.132 END TEST acl 00:04:11.132 ************************************ 00:04:11.132 00:04:11.132 real 0m4.329s 00:04:11.132 user 0m1.930s 00:04:11.132 sys 0m2.371s 00:04:11.132 04:19:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.132 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:11.132 04:19:14 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:11.132 04:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.132 04:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.132 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:11.132 ************************************ 00:04:11.132 START TEST hugepages 00:04:11.132 ************************************ 00:04:11.132 04:19:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:11.132 * Looking for test storage... 
00:04:11.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:11.132 04:19:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:11.132 04:19:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:11.132 04:19:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:11.392 04:19:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:11.392 04:19:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:11.392 04:19:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:11.392 04:19:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:11.392 04:19:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:11.392 04:19:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:11.392 04:19:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.392 04:19:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:11.392 04:19:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:11.392 04:19:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:11.392 04:19:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:11.392 04:19:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:11.392 04:19:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:11.392 04:19:14 -- scripts/common.sh@344 -- # : 1 00:04:11.392 04:19:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:11.392 04:19:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.392 04:19:14 -- scripts/common.sh@364 -- # decimal 1 00:04:11.392 04:19:14 -- scripts/common.sh@352 -- # local d=1 00:04:11.392 04:19:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.392 04:19:14 -- scripts/common.sh@354 -- # echo 1 00:04:11.392 04:19:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:11.392 04:19:14 -- scripts/common.sh@365 -- # decimal 2 00:04:11.392 04:19:14 -- scripts/common.sh@352 -- # local d=2 00:04:11.392 04:19:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.392 04:19:14 -- scripts/common.sh@354 -- # echo 2 00:04:11.392 04:19:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:11.392 04:19:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:11.392 04:19:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:11.392 04:19:14 -- scripts/common.sh@367 -- # return 0 00:04:11.392 04:19:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.392 04:19:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.392 --rc genhtml_branch_coverage=1 00:04:11.392 --rc genhtml_function_coverage=1 00:04:11.392 --rc genhtml_legend=1 00:04:11.392 --rc geninfo_all_blocks=1 00:04:11.392 --rc geninfo_unexecuted_blocks=1 00:04:11.392 00:04:11.392 ' 00:04:11.392 04:19:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.392 --rc genhtml_branch_coverage=1 00:04:11.392 --rc genhtml_function_coverage=1 00:04:11.392 --rc genhtml_legend=1 00:04:11.392 --rc geninfo_all_blocks=1 00:04:11.392 --rc geninfo_unexecuted_blocks=1 00:04:11.392 00:04:11.392 ' 00:04:11.392 04:19:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.392 --rc genhtml_branch_coverage=1 00:04:11.392 --rc genhtml_function_coverage=1 00:04:11.392 --rc genhtml_legend=1 00:04:11.392 --rc geninfo_all_blocks=1 00:04:11.392 --rc geninfo_unexecuted_blocks=1 00:04:11.392 00:04:11.392 ' 00:04:11.392 04:19:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:11.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.392 --rc genhtml_branch_coverage=1 00:04:11.392 --rc genhtml_function_coverage=1 00:04:11.392 --rc genhtml_legend=1 00:04:11.392 --rc geninfo_all_blocks=1 00:04:11.392 --rc geninfo_unexecuted_blocks=1 00:04:11.392 00:04:11.392 ' 00:04:11.392 04:19:14 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:11.392 04:19:14 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:11.392 04:19:14 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:11.392 04:19:14 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:11.392 04:19:14 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:11.392 04:19:14 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:11.392 04:19:14 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:11.392 04:19:14 -- setup/common.sh@18 -- # local node= 00:04:11.392 04:19:14 -- setup/common.sh@19 -- # local var val 00:04:11.392 04:19:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.392 04:19:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.392 04:19:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.392 04:19:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.392 04:19:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.393 04:19:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 5976632 kB' 'MemAvailable: 7358864 kB' 'Buffers: 2684 kB' 'Cached: 1595844 kB' 'SwapCached: 0 kB' 'Active: 455116 kB' 'Inactive: 1260188 kB' 'Active(anon): 127284 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 51012 kB' 'Shmem: 10508 kB' 'KReclaimable: 62316 kB' 'Slab: 156440 kB' 'SReclaimable: 62316 kB' 'SUnreclaim: 94124 kB' 'KernelStack: 6448 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 321064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- 
setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.393 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.393 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # continue 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.394 04:19:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.394 04:19:14 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:11.394 04:19:14 -- setup/common.sh@33 -- # echo 2048 00:04:11.394 04:19:14 -- setup/common.sh@33 -- # return 0 00:04:11.394 04:19:14 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:11.394 04:19:14 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:11.394 04:19:14 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:11.394 04:19:14 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:11.394 04:19:14 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:11.394 04:19:14 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:11.394 04:19:14 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:11.394 04:19:14 -- setup/hugepages.sh@207 -- # get_nodes 00:04:11.394 04:19:14 -- setup/hugepages.sh@27 -- # local node 00:04:11.394 04:19:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.394 04:19:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:11.394 04:19:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.394 04:19:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.394 04:19:14 -- setup/hugepages.sh@208 -- # clear_hp 00:04:11.394 04:19:14 -- setup/hugepages.sh@37 -- # local node hp 00:04:11.394 04:19:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:11.394 04:19:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:11.394 04:19:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:11.394 04:19:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:11.394 04:19:14 -- setup/hugepages.sh@41 -- # echo 0 00:04:11.394 04:19:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:11.394 04:19:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:11.394 04:19:14 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:11.394 04:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.394 04:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.394 04:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:11.394 ************************************ 00:04:11.394 START TEST default_setup 00:04:11.394 ************************************ 00:04:11.394 04:19:14 -- common/autotest_common.sh@1114 -- # default_setup 00:04:11.394 04:19:14 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:11.394 04:19:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.394 04:19:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:11.394 04:19:14 -- setup/hugepages.sh@51 -- # shift 00:04:11.394 04:19:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:11.394 04:19:14 -- setup/hugepages.sh@52 -- # local node_ids 00:04:11.394 04:19:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.394 04:19:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.394 04:19:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:11.394 04:19:14 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:11.394 04:19:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.394 04:19:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.394 04:19:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.394 04:19:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.394 04:19:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.394 04:19:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:11.394 04:19:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.394 04:19:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:11.394 04:19:14 -- setup/hugepages.sh@73 -- # return 0 00:04:11.394 04:19:14 -- setup/hugepages.sh@137 -- # setup output 00:04:11.394 04:19:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.394 04:19:14 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.226 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.226 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.226 04:19:15 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:12.226 04:19:15 -- setup/hugepages.sh@89 -- # local node 00:04:12.226 04:19:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.226 04:19:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.226 04:19:15 -- setup/hugepages.sh@92 -- # local surp 00:04:12.226 04:19:15 -- setup/hugepages.sh@93 -- # local resv 00:04:12.226 04:19:15 -- setup/hugepages.sh@94 -- # local anon 00:04:12.226 04:19:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.226 04:19:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.226 04:19:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.226 04:19:15 -- setup/common.sh@18 -- # local node= 00:04:12.226 04:19:15 -- setup/common.sh@19 -- # local var val 00:04:12.226 04:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.226 04:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.226 04:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.226 04:19:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.226 04:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.226 04:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.226 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 04:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8090872 kB' 'MemAvailable: 9472968 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456608 kB' 'Inactive: 1260200 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119888 kB' 'Mapped: 50916 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156280 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94256 kB' 'KernelStack: 6416 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:12.226 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 04:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.226 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.226 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.226 04:19:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.226 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.226 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.226 04:19:15 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- 
setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.227 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.227 04:19:15 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.227 04:19:15 -- setup/common.sh@33 -- # echo 0 00:04:12.227 04:19:15 -- setup/common.sh@33 -- # return 0 00:04:12.227 04:19:15 -- setup/hugepages.sh@97 -- # anon=0 00:04:12.227 04:19:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.227 04:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.227 04:19:15 -- setup/common.sh@18 -- # local node= 00:04:12.227 04:19:15 -- setup/common.sh@19 -- # local var val 00:04:12.227 04:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.227 04:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.227 04:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.228 04:19:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.228 04:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.228 04:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8091324 kB' 'MemAvailable: 9473420 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456232 kB' 'Inactive: 1260200 kB' 'Active(anon): 128400 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119508 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156248 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94224 kB' 'KernelStack: 6400 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 
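The long runs of "IFS=': '", "read -r var val _", "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" and "continue" records in this part of the trace are setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the key it was asked for (here HugePages_Surp), then echoing that value. Below is a minimal stand-alone sketch of the same idea; it is simplified from what the trace shows (for instance, the real helper also strips the leading "Node <id> " prefix when it reads a per-node meminfo file), so treat it as illustration rather than the SPDK code itself.

get_meminfo() {
    # Usage: get_meminfo <field> [node] -> prints the numeric value of <field>.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read that node's view of meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one; the trailing unit ("kB") lands in $_.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

# Values this would report on the VM in this run (taken from the meminfo snapshots above):
#   get_meminfo Hugepagesize   -> 2048   (kB)
#   get_meminfo HugePages_Surp -> 0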
00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- 
setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.228 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.228 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 
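For the numbers being verified here, it helps to recall the arithmetic from earlier in this trace (setup/hugepages.sh@136 down to @73): the default_setup test asked for 2097152 kB of hugepage memory on node 0, and with the 2048 kB default page size read from /proc/meminfo that works out to 1024 pages, all assigned to the single node this VM exposes. A rough sketch of that bookkeeping is below, with names mirroring the trace; the division simply reproduces the numbers in the log, and the real helper's logic is more involved.

default_hugepages=2048                         # kB, Hugepagesize from /proc/meminfo
size=2097152                                   # kB requested by default_setup (2 GiB)
nr_hugepages=$(( size / default_hugepages ))   # -> 1024 pages
user_nodes=(0)                                 # only node0 exists on this VM
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages            # node 0 carries all 1024 pages
done
echo "nr_hugepages=$nr_hugepages, per node: ${nodes_test[*]}"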
00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.229 04:19:15 -- setup/common.sh@33 -- # echo 0 00:04:12.229 04:19:15 -- setup/common.sh@33 -- # return 0 00:04:12.229 04:19:15 -- setup/hugepages.sh@99 -- # surp=0 00:04:12.229 04:19:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.229 04:19:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.229 04:19:15 -- setup/common.sh@18 -- # local node= 00:04:12.229 04:19:15 -- setup/common.sh@19 -- # local var val 00:04:12.229 04:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.229 04:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.229 04:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.229 04:19:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.229 04:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.229 04:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.229 
04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8091324 kB' 'MemAvailable: 9473420 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456200 kB' 'Inactive: 1260200 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156248 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94224 kB' 'KernelStack: 6400 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 
04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.229 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.229 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 
04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.230 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.230 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.230 04:19:15 -- setup/common.sh@33 -- # echo 0 00:04:12.230 04:19:15 -- setup/common.sh@33 -- # return 0 00:04:12.230 nr_hugepages=1024 00:04:12.230 resv_hugepages=0 00:04:12.230 surplus_hugepages=0 00:04:12.230 anon_hugepages=0 00:04:12.230 04:19:15 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.230 04:19:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.230 04:19:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.230 04:19:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.230 04:19:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.230 04:19:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.230 04:19:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.230 04:19:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.230 04:19:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.230 04:19:15 -- setup/common.sh@18 -- # local node= 00:04:12.230 04:19:15 -- setup/common.sh@19 -- # local var val 00:04:12.230 04:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.230 04:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.230 04:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.230 04:19:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.230 04:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.230 04:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8091324 kB' 'MemAvailable: 9473420 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456480 kB' 'Inactive: 1260200 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119812 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156244 kB' 
'SReclaimable: 62024 kB' 'SUnreclaim: 94220 kB' 'KernelStack: 6416 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 
04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.490 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.490 04:19:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.490 04:19:15 -- 
setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- 
setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.491 04:19:15 -- setup/common.sh@33 -- # echo 1024 00:04:12.491 04:19:15 -- setup/common.sh@33 -- # return 0 00:04:12.491 04:19:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.491 04:19:15 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.491 04:19:15 -- setup/hugepages.sh@27 -- # local node 00:04:12.491 04:19:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.491 04:19:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.491 04:19:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.491 04:19:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.491 04:19:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.491 04:19:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.491 04:19:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.491 04:19:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.491 04:19:15 -- setup/common.sh@18 -- # local node=0 00:04:12.491 04:19:15 -- setup/common.sh@19 -- # local var val 00:04:12.491 04:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.491 04:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.491 04:19:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.491 04:19:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.491 04:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.491 04:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8091324 kB' 'MemUsed: 4147796 kB' 'SwapCached: 0 kB' 'Active: 456168 kB' 'Inactive: 1260200 kB' 'Active(anon): 128336 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1598516 kB' 'Mapped: 50792 kB' 'AnonPages: 119744 kB' 'Shmem: 10484 kB' 'KernelStack: 6400 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62024 kB' 'Slab: 156244 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 
04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.491 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.491 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.492 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.492 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.492 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.492 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.492 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.492 04:19:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.492 04:19:15 -- setup/common.sh@33 -- # echo 0 00:04:12.492 04:19:15 -- setup/common.sh@33 -- # return 0 00:04:12.492 04:19:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.492 04:19:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.492 04:19:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.492 node0=1024 expecting 1024 00:04:12.492 ************************************ 00:04:12.492 END TEST default_setup 00:04:12.492 ************************************ 00:04:12.492 04:19:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.492 04:19:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:12.492 04:19:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.492 00:04:12.492 real 0m0.999s 00:04:12.492 user 0m0.471s 00:04:12.492 sys 0m0.457s 00:04:12.492 04:19:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.492 04:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:12.492 04:19:15 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:12.492 04:19:15 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.492 04:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.492 04:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:12.492 ************************************ 00:04:12.492 START TEST per_node_1G_alloc 00:04:12.492 ************************************ 00:04:12.492 04:19:15 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:12.492 04:19:15 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:12.492 04:19:15 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:12.492 04:19:15 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:12.492 04:19:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:12.492 04:19:15 -- setup/hugepages.sh@51 -- # shift 00:04:12.492 04:19:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:12.492 04:19:15 -- setup/hugepages.sh@52 -- # local node_ids 00:04:12.492 04:19:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.492 04:19:15 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:12.492 04:19:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:12.492 04:19:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:12.492 04:19:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.492 04:19:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.492 04:19:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.492 04:19:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.492 04:19:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.492 04:19:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:12.492 04:19:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:12.492 04:19:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:12.492 04:19:15 -- setup/hugepages.sh@73 -- # return 0 00:04:12.492 04:19:15 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:12.492 04:19:15 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:12.492 04:19:15 -- setup/hugepages.sh@146 -- # setup output 00:04:12.492 04:19:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.492 04:19:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.750 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.750 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.750 04:19:15 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:12.750 04:19:15 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:12.750 04:19:15 -- setup/hugepages.sh@89 -- # local node 00:04:12.750 04:19:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.750 04:19:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.750 04:19:15 -- setup/hugepages.sh@92 -- # local surp 00:04:12.750 04:19:15 -- setup/hugepages.sh@93 -- # local resv 00:04:12.750 04:19:15 -- setup/hugepages.sh@94 -- # local anon 00:04:12.750 04:19:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.750 04:19:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.750 04:19:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.750 04:19:15 -- setup/common.sh@18 -- # local node= 00:04:12.750 04:19:15 -- setup/common.sh@19 -- # local var val 00:04:12.750 04:19:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.750 04:19:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.750 04:19:15 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.750 04:19:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.750 04:19:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.750 04:19:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9140772 kB' 'MemAvailable: 10522868 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456816 kB' 'Inactive: 1260200 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 120096 kB' 'Mapped: 50888 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156248 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94224 kB' 'KernelStack: 6376 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 
-- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.750 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.750 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.751 04:19:15 -- setup/common.sh@32 -- # continue 00:04:12.751 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 
04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.010 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.011 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.011 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.011 04:19:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.011 04:19:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.011 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.011 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.011 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.011 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.011 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.011 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.011 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.011 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.011 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9140520 kB' 'MemAvailable: 10522616 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456496 kB' 'Inactive: 1260200 kB' 
'Active(anon): 128664 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50844 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156268 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94244 kB' 'KernelStack: 6416 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # 
continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.012 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.012 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.012 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.012 04:19:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.012 04:19:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.012 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.012 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.012 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.012 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.012 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.012 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.012 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.012 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.012 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.012 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.012 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9140520 kB' 'MemAvailable: 10522616 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456488 kB' 'Inactive: 1260200 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156260 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94236 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 
'DirectMap1G: 10485760 kB' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 
04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.013 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.013 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.014 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.014 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.014 nr_hugepages=512 00:04:13.014 resv_hugepages=0 00:04:13.014 surplus_hugepages=0 00:04:13.014 anon_hugepages=0 00:04:13.014 04:19:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.014 04:19:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:13.014 04:19:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.014 04:19:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.014 04:19:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.014 04:19:16 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.014 04:19:16 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:13.014 04:19:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.014 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.014 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.014 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.014 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.014 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.014 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.014 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.014 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.014 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9140520 kB' 'MemAvailable: 10522616 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456504 kB' 'Inactive: 1260200 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156260 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94236 kB' 'KernelStack: 6416 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 
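The repeated entries above come from the setup/common.sh get_meminfo helper walking /proc/meminfo one field at a time (IFS=': ', read -r var val _) until it hits the requested key, then echoing that value — here HugePages_Rsvd resolves to 0 and nr_hugepages to 512. A minimal sketch of that lookup, assuming a simplified system-wide-only form (the name get_meminfo_value is hypothetical; the traced helper is get_meminfo and also accepts a NUMA node number):

# Minimal sketch (simplified; not the script's exact code) of the /proc/meminfo
# lookup the trace keeps repeating: split each line on ": ", compare the key
# against the requested field, and print its value.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Example lookups matching the fields read in the trace above:
get_meminfo_value HugePages_Rsvd    # 0 in this run
get_meminfo_value HugePages_Total   # 512 in this run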
00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 
00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.014 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.014 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.015 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.015 04:19:16 -- setup/common.sh@33 -- # echo 512 00:04:13.015 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.015 04:19:16 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.015 04:19:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.015 04:19:16 -- setup/hugepages.sh@27 -- # local node 00:04:13.015 04:19:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.015 04:19:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.015 04:19:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.015 04:19:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.015 04:19:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.015 04:19:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.015 04:19:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.015 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.015 04:19:16 -- setup/common.sh@18 -- # local node=0 00:04:13.015 04:19:16 -- 
setup/common.sh@19 -- # local var val 00:04:13.015 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.015 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.015 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.015 04:19:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.015 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.015 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.015 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9140784 kB' 'MemUsed: 3098336 kB' 'SwapCached: 0 kB' 'Active: 456556 kB' 'Inactive: 1260200 kB' 'Active(anon): 128724 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1598516 kB' 'Mapped: 50792 kB' 'AnonPages: 119828 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62024 kB' 'Slab: 156256 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 
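The entries above show the per-node form of the same lookup: with node=0, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped with an extglob pattern before key matching. A minimal sketch of that per-node read, assuming HugePages_Surp on node 0 as in the trace:

# Minimal sketch of the per-node variant visible in the trace (simplified).
shopt -s extglob                  # needed for the +([0-9]) pattern below
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
done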
00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 
00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.016 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.016 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.016 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.016 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.016 04:19:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.016 04:19:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.016 04:19:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.016 node0=512 expecting 512 00:04:13.016 ************************************ 00:04:13.016 END TEST per_node_1G_alloc 00:04:13.016 ************************************ 00:04:13.016 04:19:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.016 04:19:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.016 04:19:16 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.017 00:04:13.017 real 0m0.583s 00:04:13.017 user 0m0.282s 00:04:13.017 sys 0m0.305s 00:04:13.017 04:19:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.017 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.017 04:19:16 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:13.017 04:19:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.017 04:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.017 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.017 ************************************ 00:04:13.017 START TEST even_2G_alloc 00:04:13.017 ************************************ 00:04:13.017 04:19:16 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:13.017 04:19:16 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:13.017 04:19:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.017 04:19:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.017 04:19:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.017 04:19:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.017 04:19:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.017 04:19:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.017 04:19:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.017 04:19:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.017 04:19:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.017 04:19:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.017 04:19:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.017 04:19:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
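At this point the per_node_1G_alloc test has confirmed the accounting ((( 512 == nr_hugepages + surp + resv ))) and that node 0 reports the full 512 pages ("node0=512 expecting 512"), before even_2G_alloc starts. A hypothetical condensed re-expression of those checks, not the script's exact code:

# Hypothetical condensation of the checks traced above: all requested hugepages
# must be accounted for system-wide, and each NUMA node's HugePages_Total is
# printed next to the count it was expected to receive.
expected=512
read -r total surp resv < <(awk '/^HugePages_(Total|Surp|Rsvd):/ {v[$1]=$2}
    END {print v["HugePages_Total:"], v["HugePages_Surp:"], v["HugePages_Rsvd:"]}' /proc/meminfo)
(( expected == total + surp + resv )) || echo "hugepage accounting mismatch" >&2

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    count=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${count} expecting ${expected}"
done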
00:04:13.017 04:19:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.017 04:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.017 04:19:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:13.017 04:19:16 -- setup/hugepages.sh@83 -- # : 0 00:04:13.017 04:19:16 -- setup/hugepages.sh@84 -- # : 0 00:04:13.017 04:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.017 04:19:16 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:13.017 04:19:16 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:13.017 04:19:16 -- setup/hugepages.sh@153 -- # setup output 00:04:13.017 04:19:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.017 04:19:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.585 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.585 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.585 04:19:16 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:13.585 04:19:16 -- setup/hugepages.sh@89 -- # local node 00:04:13.585 04:19:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.585 04:19:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.585 04:19:16 -- setup/hugepages.sh@92 -- # local surp 00:04:13.585 04:19:16 -- setup/hugepages.sh@93 -- # local resv 00:04:13.585 04:19:16 -- setup/hugepages.sh@94 -- # local anon 00:04:13.585 04:19:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.585 04:19:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.585 04:19:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.585 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.585 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.585 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.585 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.585 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.585 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.585 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.585 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8096776 kB' 'MemAvailable: 9478872 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 1260200 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 50884 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156204 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94180 kB' 'KernelStack: 6376 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 
kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 
04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.585 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.585 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- 
setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.586 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.586 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.586 04:19:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.586 04:19:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.586 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.586 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.586 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.586 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.586 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.586 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.586 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.586 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.586 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8096528 kB' 'MemAvailable: 9478624 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456660 kB' 'Inactive: 1260200 kB' 'Active(anon): 128828 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119936 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156224 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94200 kB' 'KernelStack: 6416 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- 
setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 
-- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.586 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.586 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- 
setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.587 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.587 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.587 04:19:16 -- setup/hugepages.sh@99 -- # surp=0 
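The scan traced above is setup/common.sh's get_meminfo at work: the /proc/meminfo snapshot is captured with mapfile, each "key: value" record is split on IFS=': ', and every key that is not the requested one (here HugePages_Surp) takes the continue branch until the matching key echoes its value, yielding surp=0. A minimal stand-alone sketch of that lookup, simplified to a straight read loop rather than the mapfile/array walk shown in the trace (the function name is illustrative, not the SPDK source):

    get_meminfo_sketch() {
        # Illustrative system-wide lookup; the traced helper also handles
        # /sys/devices/system/node/node<N>/meminfo by stripping the "Node <N> " prefix.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # any 'kB' unit lands in the discarded field
            return 0
        done < /proc/meminfo
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp, it would print 0 on this machine, matching the surp=0 assignment recorded just above.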
00:04:13.587 04:19:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.587 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.587 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.587 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.587 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.587 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.587 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.587 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.587 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.587 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8096528 kB' 'MemAvailable: 9478624 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456472 kB' 'Inactive: 1260200 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119728 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156220 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94196 kB' 'KernelStack: 6400 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 
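The HugePages_Rsvd query that begins here, together with the HugePages_Total and per-node HugePages_Surp reads further down, feeds the consistency check in setup/hugepages.sh: the configured total must equal nr_hugepages plus surplus plus reserved pages, and with a single NUMA node the whole expectation is assigned to node 0. A condensed sketch of that accounting using the values visible in this run (variable names follow the trace; the loop body is simplified):

    nr_hugepages=1024 surp=0 resv=0 anon=0      # results of the get_meminfo calls in this run
    (( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

    # One node in this VM, so the full expectation lands on node 0
    declare -A nodes_test=([0]=$nr_hugepages)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))          # mirrors the hugepages.sh@116 step in the trace
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

With surp and resv both 0 the check passes and the script prints 'node0=1024 expecting 1024', which is exactly what appears before the END TEST even_2G_alloc banner below.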
00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.587 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.587 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 
-- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.588 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.588 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.588 04:19:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.588 nr_hugepages=1024 00:04:13.588 04:19:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.588 resv_hugepages=0 00:04:13.588 surplus_hugepages=0 00:04:13.588 anon_hugepages=0 00:04:13.588 04:19:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.588 04:19:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.588 04:19:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.588 04:19:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.588 04:19:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.588 04:19:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.588 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.588 04:19:16 -- setup/common.sh@18 -- # local node= 00:04:13.588 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.588 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.588 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.588 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.588 04:19:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.588 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.588 04:19:16 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8096528 kB' 'MemAvailable: 9478624 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456464 kB' 'Inactive: 1260200 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 119720 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62024 kB' 'Slab: 156220 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94196 kB' 'KernelStack: 6400 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 
04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.588 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.588 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 
04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.589 04:19:16 -- setup/common.sh@33 -- # echo 1024 00:04:13.589 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.589 04:19:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.589 04:19:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.589 04:19:16 -- setup/hugepages.sh@27 -- # local node 00:04:13.589 04:19:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.589 04:19:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.589 04:19:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.589 04:19:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.589 04:19:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.589 04:19:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.589 04:19:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.589 04:19:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.589 04:19:16 -- setup/common.sh@18 -- # local node=0 00:04:13.589 04:19:16 -- setup/common.sh@19 -- # local var val 00:04:13.589 04:19:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.589 04:19:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.589 04:19:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.589 04:19:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.589 04:19:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.589 04:19:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8096528 kB' 'MemUsed: 4142592 kB' 'SwapCached: 0 kB' 'Active: 456456 kB' 'Inactive: 1260200 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 1598516 kB' 'Mapped: 50792 kB' 'AnonPages: 119720 kB' 'Shmem: 10484 kB' 'KernelStack: 6400 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62024 kB' 'Slab: 156220 kB' 'SReclaimable: 62024 kB' 'SUnreclaim: 94196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.589 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.589 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # continue 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.590 04:19:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.590 04:19:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.590 04:19:16 -- setup/common.sh@33 -- # echo 0 00:04:13.590 04:19:16 -- setup/common.sh@33 -- # return 0 00:04:13.590 04:19:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.590 04:19:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.590 04:19:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.590 04:19:16 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:13.590 node0=1024 expecting 1024 00:04:13.590 ************************************ 00:04:13.590 END TEST even_2G_alloc 00:04:13.590 ************************************ 00:04:13.590 04:19:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.590 04:19:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.590 00:04:13.590 real 0m0.580s 00:04:13.590 user 0m0.295s 00:04:13.590 sys 0m0.281s 00:04:13.590 04:19:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.590 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 04:19:16 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:13.848 04:19:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.848 04:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.848 04:19:16 -- common/autotest_common.sh@10 -- # set +x 00:04:13.848 ************************************ 00:04:13.848 START TEST odd_alloc 00:04:13.848 ************************************ 00:04:13.848 04:19:16 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:13.848 04:19:16 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:13.848 04:19:16 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:13.848 04:19:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.848 04:19:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.848 04:19:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:13.848 04:19:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.848 04:19:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.848 04:19:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.848 04:19:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:13.848 04:19:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.848 04:19:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.848 04:19:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.848 04:19:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.848 04:19:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.848 04:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.848 04:19:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:13.848 04:19:16 -- setup/hugepages.sh@83 -- # : 0 00:04:13.848 04:19:16 -- setup/hugepages.sh@84 -- # : 0 00:04:13.849 04:19:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.849 04:19:16 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:13.849 04:19:16 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:13.849 04:19:16 -- setup/hugepages.sh@160 -- # setup output 00:04:13.849 04:19:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.849 04:19:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.109 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.109 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.109 04:19:17 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:14.109 04:19:17 -- setup/hugepages.sh@89 -- # local node 00:04:14.109 04:19:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.109 04:19:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.109 04:19:17 -- setup/hugepages.sh@92 -- # local surp 00:04:14.109 04:19:17 -- setup/hugepages.sh@93 -- # local resv 00:04:14.109 04:19:17 -- setup/hugepages.sh@94 -- # 
local anon 00:04:14.109 04:19:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.109 04:19:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.109 04:19:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.109 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.109 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.109 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.109 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.109 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.109 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.109 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.109 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8105088 kB' 'MemAvailable: 9487192 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456920 kB' 'Inactive: 1260200 kB' 'Active(anon): 129088 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120232 kB' 'Mapped: 50880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62040 kB' 'Slab: 156236 kB' 'SReclaimable: 62040 kB' 'SUnreclaim: 94196 kB' 'KernelStack: 6424 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 
04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.110 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.110 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.110 04:19:17 -- setup/hugepages.sh@97 -- # anon=0 00:04:14.110 04:19:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.110 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.110 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.110 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.110 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.110 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.110 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.110 04:19:17 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.110 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.110 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8104596 kB' 'MemAvailable: 9486708 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456524 kB' 'Inactive: 1260200 kB' 'Active(anon): 128692 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156256 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94200 kB' 'KernelStack: 6416 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 04:19:17 -- setup/common.sh@32 -- # 
continue 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.111 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.111 04:19:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:14.111 04:19:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.111 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.111 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.111 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.111 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.111 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.111 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.111 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.111 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.111 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8104596 kB' 'MemAvailable: 9486708 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456480 kB' 'Inactive: 1260200 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156256 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94200 kB' 'KernelStack: 6400 kB' 
'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.111 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.112 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.113 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.113 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.113 04:19:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:14.113 04:19:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:14.113 nr_hugepages=1025 00:04:14.113 resv_hugepages=0 00:04:14.113 surplus_hugepages=0 00:04:14.113 anon_hugepages=0 00:04:14.113 04:19:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.113 04:19:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.113 04:19:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.113 04:19:17 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:14.113 04:19:17 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:14.113 04:19:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.113 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.113 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.113 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.113 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.113 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.113 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.113 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.113 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.113 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.113 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8104848 kB' 'MemAvailable: 9486960 kB' 'Buffers: 2684 kB' 'Cached: 1595832 kB' 'SwapCached: 0 kB' 'Active: 456500 kB' 'Inactive: 1260200 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156256 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94200 kB' 'KernelStack: 6400 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.113 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.113 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.113 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.372 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.372 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.372 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 
00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.373 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.373 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.374 04:19:17 -- setup/common.sh@33 -- # echo 1025 00:04:14.374 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.374 04:19:17 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:14.374 04:19:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.374 04:19:17 -- setup/hugepages.sh@27 -- # local node 00:04:14.374 04:19:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.374 04:19:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
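The block above is setup/common.sh's get_meminfo walking every key of /sys/devices/system/node/node0/meminfo until it reaches HugePages_Total and echoing its value (1025), which hugepages.sh then checks against nr_hugepages + surp + resv before enumerating the nodes. A minimal sketch of that parser, reconstructed from the trace (the function shape and variable handling are assumptions, not the verbatim SPDK source):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # per-node queries read the node's own meminfo and strip the "Node N " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem line
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # every non-matching key shows up as a "continue" line in the xtrace above
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Total 0  ->  1025 on node0 in the run above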
00:04:14.374 04:19:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.374 04:19:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.374 04:19:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.374 04:19:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.374 04:19:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.374 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.374 04:19:17 -- setup/common.sh@18 -- # local node=0 00:04:14.374 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.374 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.374 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.374 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.374 04:19:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.374 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.374 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8108048 kB' 'MemUsed: 4131072 kB' 'SwapCached: 0 kB' 'Active: 456652 kB' 'Inactive: 1260200 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260200 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1598516 kB' 'Mapped: 51052 kB' 'AnonPages: 119968 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62056 kB' 'Slab: 156252 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 
04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 
04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.374 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.374 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.375 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.375 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.375 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.375 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.375 04:19:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.375 04:19:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.375 04:19:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.375 node0=1025 expecting 1025 00:04:14.375 04:19:17 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:14.375 04:19:17 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:14.375 00:04:14.375 real 0m0.562s 00:04:14.375 user 0m0.273s 00:04:14.375 sys 0m0.294s 00:04:14.375 04:19:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.375 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.375 ************************************ 00:04:14.375 END TEST odd_alloc 00:04:14.375 ************************************ 00:04:14.375 04:19:17 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:14.375 04:19:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.375 04:19:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.375 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.375 ************************************ 00:04:14.375 START TEST custom_alloc 00:04:14.375 ************************************ 00:04:14.375 04:19:17 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:14.375 04:19:17 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:14.375 04:19:17 -- setup/hugepages.sh@169 -- # local node 00:04:14.375 04:19:17 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:14.375 04:19:17 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:14.375 04:19:17 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:14.375 04:19:17 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:14.375 04:19:17 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.375 04:19:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.375 04:19:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.375 04:19:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.375 04:19:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.375 04:19:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.375 04:19:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.375 04:19:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.375 04:19:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.375 04:19:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:14.375 04:19:17 -- setup/hugepages.sh@83 -- # : 0 00:04:14.375 04:19:17 -- setup/hugepages.sh@84 -- # : 0 00:04:14.375 04:19:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:14.375 04:19:17 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:14.375 04:19:17 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:14.375 04:19:17 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:14.375 04:19:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.375 04:19:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.375 04:19:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.375 04:19:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.375 04:19:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.375 04:19:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.375 04:19:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:14.375 04:19:17 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.375 04:19:17 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:14.375 04:19:17 -- setup/hugepages.sh@78 -- # return 0 00:04:14.375 04:19:17 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:14.375 04:19:17 -- setup/hugepages.sh@187 -- # setup output 00:04:14.375 04:19:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.375 04:19:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.670 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.670 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.670 04:19:17 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:14.670 04:19:17 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:14.670 04:19:17 -- setup/hugepages.sh@89 -- # local node 00:04:14.670 04:19:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.670 04:19:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.670 04:19:17 -- setup/hugepages.sh@92 -- # local surp 
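Above, custom_alloc requests 1 GiB of hugepages (get_test_nr_hugepages 1048576) and arrives at HUGENODE='nodes_hp[0]=512' before re-running setup.sh. The sizing arithmetic, as a small sketch using values from this run (variable names here are illustrative only, not the script's own):

    size_kb=1048576                                                    # requested: 1 GiB in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 kB on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                      # 1048576 / 2048 = 512
    no_nodes=1                                                         # single NUMA node in this guest
    echo "HUGENODE='nodes_hp[0]=$(( nr_hugepages / no_nodes ))'"       # nodes_hp[0]=512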
00:04:14.670 04:19:17 -- setup/hugepages.sh@93 -- # local resv 00:04:14.670 04:19:17 -- setup/hugepages.sh@94 -- # local anon 00:04:14.670 04:19:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.670 04:19:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.670 04:19:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.670 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.670 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.670 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.670 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.670 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.670 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.670 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.670 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9157624 kB' 'MemAvailable: 10539740 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456684 kB' 'Inactive: 1260204 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156220 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94164 kB' 'KernelStack: 6408 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.670 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.670 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.670 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.670 04:19:17 -- setup/hugepages.sh@97 -- # anon=0 00:04:14.670 04:19:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.670 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.670 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.670 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.670 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.670 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
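From here the trace is verify_nr_hugepages cross-checking what setup.sh reserved: AnonHugePages is only counted when transparent hugepages are not disabled (the "always [madvise] never" test above), then HugePages_Surp and HugePages_Rsvd are read and HugePages_Total must equal nr_hugepages plus surplus plus reserved. A condensed sketch of those checks, reusing the get_meminfo sketch given earlier (an approximation of hugepages.sh, not its verbatim code):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)          # "always [madvise] never" in this run
    anon=0
    [[ $thp != *"[never]"* ]] && anon=$(get_meminfo AnonHugePages)   # 0 kB above
    surp=$(get_meminfo HugePages_Surp)                               # surplus pages, 0 below
    resv=$(get_meminfo HugePages_Rsvd)                               # reserved pages, 0 below
    total=$(get_meminfo HugePages_Total)                             # 512 for this custom_alloc run
    # nr_hugepages=512, from the sizing step sketched above
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count" >&2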
00:04:14.670 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.670 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.670 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.670 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.670 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9157876 kB' 'MemAvailable: 10539992 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456780 kB' 'Inactive: 1260204 kB' 'Active(anon): 128948 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156216 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94160 kB' 'KernelStack: 6360 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- 
setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.671 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.671 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.962 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.962 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.962 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.962 04:19:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:14.962 04:19:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.962 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.962 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.962 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.962 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.962 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.962 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.962 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.962 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.962 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.962 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9158224 kB' 'MemAvailable: 10540340 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456500 kB' 'Inactive: 1260204 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 
50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156216 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94160 kB' 'KernelStack: 6344 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 
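The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' entries above and below are bash xtrace from the get_meminfo helper in setup/common.sh: it slurps /proc/meminfo (or the per-node copy under /sys/devices/system/node/) and scans it key by key until it reaches the requested field, then echoes that value. A minimal standalone sketch reconstructed from the visible trace follows; it is not the original script, just the same parsing idea, with names taken from the trace:

#!/usr/bin/env bash
# Sketch of the get_meminfo helper as reconstructed from the xtrace in this log.
shopt -s extglob   # the "Node +([0-9]) " prefix strip below uses extended globs

get_meminfo() {
    local get=$1          # field to look up, e.g. HugePages_Rsvd
    local node=$2         # optional NUMA node number
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-node lookups read the sysfs copy of meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan key by key; this loop is what produces the long "continue" runs in the log.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd   # prints 0 on the test VM traced here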
00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.963 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.963 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.964 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.964 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.964 04:19:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:14.964 nr_hugepages=512 00:04:14.964 04:19:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:14.964 resv_hugepages=0 00:04:14.964 04:19:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.964 surplus_hugepages=0 00:04:14.964 04:19:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.964 anon_hugepages=0 00:04:14.964 04:19:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.964 04:19:17 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:14.964 04:19:17 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:14.964 04:19:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.964 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.964 04:19:17 -- setup/common.sh@18 -- # local node= 00:04:14.964 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.964 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.964 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.964 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.964 04:19:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.964 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.964 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.964 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9158224 kB' 'MemAvailable: 10540340 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456680 kB' 'Inactive: 1260204 kB' 'Active(anon): 128848 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119900 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156216 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94160 kB' 'KernelStack: 6352 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 332556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 
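In plain terms, the setup/hugepages.sh trace around this point is the bookkeeping for the custom_alloc test: 512 hugepages were requested against node 0, and the verification step re-reads the surplus, reserved and total counts and insists they add up before it can print 'node0=512 expecting 512'. A condensed sketch of that accounting, assuming a get_meminfo helper like the one sketched above; the variable names (surp, resv, nr_hugepages) mirror the trace tags setup/hugepages.sh@99-130, but this is not the original script:

#!/usr/bin/env bash
# Condensed sketch of the hugepage accounting traced above (setup/hugepages.sh@99-130).
# Relies on a get_meminfo helper like the one sketched earlier in this log.
# Illustrative function name; the traced script calls this verify_nr_hugepages.

verify_custom_alloc() {
    local nr_hugepages=512                    # what the custom_alloc test asked for
    local surp resv total node0_total

    surp=$(get_meminfo HugePages_Surp)        # 0 in the log above
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in the log above
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    total=$(get_meminfo HugePages_Total)      # 512 in the log above
    # Every page in the pool must be accounted for.
    (( total == nr_hugepages + surp + resv )) || return 1

    # The pages were requested against node 0, so node 0 must hold all of them.
    node0_total=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0_total expecting $nr_hugepages"
    [[ $node0_total == "$nr_hugepages" ]]
}

verify_custom_alloc && echo "custom_alloc accounting OK"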
00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.964 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.964 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.965 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.965 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.966 04:19:17 -- setup/common.sh@33 -- # echo 512 00:04:14.966 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.966 04:19:17 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:14.966 04:19:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.966 04:19:17 -- setup/hugepages.sh@27 -- # local node 00:04:14.966 04:19:17 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:14.966 04:19:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.966 04:19:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.966 04:19:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.966 04:19:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.966 04:19:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.966 04:19:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.966 04:19:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.966 04:19:17 -- setup/common.sh@18 -- # local node=0 00:04:14.966 04:19:17 -- setup/common.sh@19 -- # local var val 00:04:14.966 04:19:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.966 04:19:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.966 04:19:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.966 04:19:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.966 04:19:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.966 04:19:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9158224 kB' 'MemUsed: 3080896 kB' 'SwapCached: 0 kB' 'Active: 456500 kB' 'Inactive: 1260204 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1598520 kB' 'Mapped: 50792 kB' 'AnonPages: 119724 kB' 'Shmem: 10484 kB' 'KernelStack: 6404 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62056 kB' 'Slab: 156216 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.966 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.966 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 
04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # continue 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.967 04:19:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.967 04:19:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.967 04:19:17 -- setup/common.sh@33 -- # echo 0 00:04:14.967 04:19:17 -- setup/common.sh@33 -- # return 0 00:04:14.967 04:19:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.967 04:19:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.967 04:19:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.967 04:19:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.967 node0=512 expecting 512 00:04:14.967 04:19:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:14.967 04:19:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:14.967 00:04:14.967 real 0m0.532s 00:04:14.967 user 0m0.280s 00:04:14.967 sys 0m0.288s 00:04:14.967 04:19:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.967 04:19:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.967 ************************************ 00:04:14.967 END TEST custom_alloc 00:04:14.967 ************************************ 00:04:14.967 04:19:18 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:14.967 04:19:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.967 04:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.967 04:19:18 -- common/autotest_common.sh@10 -- # set +x 00:04:14.967 ************************************ 00:04:14.967 START TEST no_shrink_alloc 00:04:14.967 ************************************ 00:04:14.967 04:19:18 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:14.967 04:19:18 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:14.967 04:19:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.967 04:19:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.968 04:19:18 -- 
setup/hugepages.sh@51 -- # shift 00:04:14.968 04:19:18 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.968 04:19:18 -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.968 04:19:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.968 04:19:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.968 04:19:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.968 04:19:18 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.968 04:19:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.968 04:19:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.968 04:19:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.968 04:19:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.968 04:19:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.968 04:19:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.968 04:19:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.968 04:19:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.968 04:19:18 -- setup/hugepages.sh@73 -- # return 0 00:04:14.968 04:19:18 -- setup/hugepages.sh@198 -- # setup output 00:04:14.968 04:19:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.968 04:19:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.227 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.227 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.227 04:19:18 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:15.227 04:19:18 -- setup/hugepages.sh@89 -- # local node 00:04:15.227 04:19:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.227 04:19:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.227 04:19:18 -- setup/hugepages.sh@92 -- # local surp 00:04:15.227 04:19:18 -- setup/hugepages.sh@93 -- # local resv 00:04:15.227 04:19:18 -- setup/hugepages.sh@94 -- # local anon 00:04:15.227 04:19:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.227 04:19:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.227 04:19:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.227 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:15.227 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.227 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.227 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.227 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.227 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.227 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.227 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8107688 kB' 'MemAvailable: 9489804 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456832 kB' 'Inactive: 1260204 kB' 'Active(anon): 129000 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120120 kB' 
'Mapped: 51260 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156244 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94188 kB' 'KernelStack: 6488 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.227 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.227 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
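The AnonHugePages scan running through this stretch comes from the step traced back at setup/hugepages.sh@96-97: before re-counting reserved and surplus pages, verify_nr_hugepages checks whether transparent hugepages are forced off and, if they are not, records AnonHugePages (it ends up 0 here, as the 'anon=0' entry just below shows). A small sketch of that check; the sysfs path is an assumption, since the trace only shows the file's contents ('always [madvise] never'):

#!/usr/bin/env bash
# Sketch of the anonymous-hugepage check traced at setup/hugepages.sh@96-97.
# The sysfs path below is an assumption; the log only shows its value,
# "always [madvise] never", i.e. THP is set to madvise, not forced off.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)

anon=0
# Only bother reading AnonHugePages when THP is not set to "[never]".
if [[ $thp != *"[never]"* ]]; then
    # Equivalent to get_meminfo AnonHugePages in the traced script (value in kB).
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon:-0}"   # prints anon_hugepages=0 in this run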
00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.228 04:19:18 -- setup/common.sh@33 -- # echo 0 00:04:15.228 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:15.228 04:19:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:15.228 04:19:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.228 04:19:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.228 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:15.228 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.228 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.228 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.228 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.228 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.228 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.228 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8107932 kB' 'MemAvailable: 9490048 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456536 kB' 'Inactive: 1260204 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156236 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94180 kB' 'KernelStack: 6416 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.228 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.228 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.489 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.489 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 
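The same scan is now repeated for HugePages_Surp, then HugePages_Rsvd and HugePages_Total, feeding the accounting check that follows further down in this run (nr_hugepages=1024, surplus_hugepages=0, resv_hugepages=0, and node0=1024 expecting 1024). A hedged sketch of that check, using values and paths visible in this log; the helper name is illustrative and not part of setup/hugepages.sh:

  # Illustrative sketch of the hugepage accounting verified by this run:
  # HugePages_Total must equal the requested count plus surplus plus reserved,
  # and node0 must report the same number of pages.
  verify_hugepages_sketch() {
      local nr=$1 surp resv total node0
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      node0=$(awk '/HugePages_Total:/ {print $NF}' /sys/devices/system/node/node0/meminfo)
      (( total == nr + surp + resv )) && (( node0 == nr ))
  }
  # verify_hugepages_sketch 1024 would succeed here: Total=1024, Surp=0, Rsvd=0.
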
00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.490 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.490 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.491 04:19:18 -- setup/common.sh@33 -- # echo 0 00:04:15.491 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:15.491 04:19:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:15.491 04:19:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.491 04:19:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.491 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:15.491 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.491 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.491 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.491 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.491 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.491 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.491 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8107932 kB' 'MemAvailable: 9490048 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 1260204 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119628 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156236 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94180 kB' 'KernelStack: 6384 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': 
' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 
-- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.491 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.491 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.492 04:19:18 -- setup/common.sh@33 -- # echo 0 00:04:15.492 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:15.492 04:19:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:15.492 nr_hugepages=1024 00:04:15.492 04:19:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.492 resv_hugepages=0 00:04:15.492 04:19:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.492 surplus_hugepages=0 00:04:15.492 04:19:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.492 anon_hugepages=0 00:04:15.492 04:19:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.492 04:19:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.492 04:19:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.492 04:19:18 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:15.492 04:19:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.492 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:15.492 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.492 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.492 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.492 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.492 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.492 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.492 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8107932 kB' 'MemAvailable: 9490048 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 456400 kB' 'Inactive: 1260204 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119680 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62056 kB' 'Slab: 156236 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94180 kB' 'KernelStack: 6400 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 332756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.492 04:19:18 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.492 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.492 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.493 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.493 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.494 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.494 04:19:18 -- setup/common.sh@33 -- # echo 1024 00:04:15.494 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:15.494 04:19:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.494 04:19:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.494 04:19:18 -- setup/hugepages.sh@27 -- # local node 00:04:15.494 04:19:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.494 04:19:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.494 04:19:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:15.494 04:19:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.494 04:19:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.494 04:19:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.494 04:19:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.494 04:19:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.494 04:19:18 -- setup/common.sh@18 -- # local node=0 00:04:15.494 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.494 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.494 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.494 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.494 04:19:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.494 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.494 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.494 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.494 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8107932 kB' 'MemUsed: 4131188 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 1260204 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1598520 kB' 'Mapped: 50792 kB' 'AnonPages: 119892 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62056 kB' 'Slab: 156268 kB' 'SReclaimable: 62056 kB' 'SUnreclaim: 94212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- 
setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.495 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.495 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.496 04:19:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.496 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.496 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.496 04:19:18 -- setup/common.sh@33 -- # echo 0 00:04:15.496 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:15.496 04:19:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.496 04:19:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.496 04:19:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.496 04:19:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.496 node0=1024 expecting 1024 00:04:15.496 04:19:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:15.496 04:19:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.496 04:19:18 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:15.496 04:19:18 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:15.496 04:19:18 -- setup/hugepages.sh@202 -- # setup output 00:04:15.496 04:19:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.496 04:19:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.757 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.757 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:15.757 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:15.757 04:19:18 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:15.757 04:19:18 -- setup/hugepages.sh@89 -- # local node 00:04:15.757 04:19:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.757 04:19:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.757 04:19:18 -- setup/hugepages.sh@92 -- # local surp 00:04:15.757 04:19:18 -- setup/hugepages.sh@93 -- # local resv 00:04:15.757 04:19:18 -- setup/hugepages.sh@94 -- # local anon 00:04:15.757 04:19:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.757 04:19:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.757 04:19:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.757 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:15.757 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.757 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.757 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.757 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.757 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.757 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.757 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106364 kB' 'MemAvailable: 9488464 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 453884 kB' 'Inactive: 1260204 kB' 'Active(anon): 126052 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117220 kB' 'Mapped: 50080 kB' 'Shmem: 10484 kB' 'KReclaimable: 62020 kB' 'Slab: 
156020 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 94000 kB' 'KernelStack: 6280 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 314332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.757 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.757 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.758 04:19:18 -- setup/common.sh@33 -- # echo 0 00:04:15.758 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:15.758 04:19:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:15.758 04:19:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.758 04:19:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.758 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:15.758 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:15.758 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.758 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.758 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.758 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.758 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.758 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106364 kB' 'MemAvailable: 9488464 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 453740 kB' 'Inactive: 1260204 kB' 'Active(anon): 125908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117032 kB' 'Mapped: 49944 kB' 'Shmem: 10484 kB' 'KReclaimable: 62020 kB' 'Slab: 155980 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 93960 kB' 'KernelStack: 6304 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 314332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.758 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.758 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 
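(Aside on the trace around this point: each run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries is setup/common.sh's get_meminfo helper scanning one "key: value" pair of /proc/meminfo at a time until it reaches the requested key, then echoing that key's value. The snippet below is a simplified sketch of that pattern, not the script itself; the real helper slurps the file with mapfile, also accepts a per-node meminfo file and strips its "Node N" prefix, but the parsing idea is the same.)

get_meminfo() {                 # sketch: get_meminfo <key>, /proc/meminfo only
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # stop at the requested key and print its value (the trailing "kB" unit
        # falls into the throwaway "_" field)
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

(In the run captured here this yields 0 for HugePages_Surp and HugePages_Rsvd and 1024 for HugePages_Total, matching the "echo 0" / "echo 1024" entries in the trace.)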
00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 
04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.759 04:19:18 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.759 04:19:18 -- setup/common.sh@32 -- # continue 00:04:15.759 04:19:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.021 04:19:18 -- setup/common.sh@33 -- # echo 0 00:04:16.021 04:19:18 -- setup/common.sh@33 -- # return 0 00:04:16.021 04:19:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:16.021 04:19:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.021 04:19:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.021 04:19:18 -- setup/common.sh@18 -- # local node= 00:04:16.021 04:19:18 -- setup/common.sh@19 -- # local var val 00:04:16.021 04:19:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.021 04:19:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.021 04:19:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.021 04:19:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.021 04:19:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.021 04:19:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106364 kB' 'MemAvailable: 9488464 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 453736 kB' 'Inactive: 1260204 kB' 'Active(anon): 125904 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117036 kB' 'Mapped: 49944 kB' 'Shmem: 10484 kB' 'KReclaimable: 62020 kB' 'Slab: 155980 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 93960 kB' 'KernelStack: 6304 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 314332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.021 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.021 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 
04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # 
continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.022 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.022 04:19:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.023 04:19:19 -- setup/common.sh@33 -- # echo 0 00:04:16.023 04:19:19 -- setup/common.sh@33 -- # return 0 00:04:16.023 04:19:19 -- setup/hugepages.sh@100 -- # resv=0 00:04:16.023 nr_hugepages=1024 00:04:16.023 04:19:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.023 resv_hugepages=0 00:04:16.023 04:19:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.023 surplus_hugepages=0 00:04:16.023 04:19:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.023 anon_hugepages=0 00:04:16.023 04:19:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.023 04:19:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.023 04:19:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.023 04:19:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.023 04:19:19 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:16.023 04:19:19 -- setup/common.sh@18 -- # local node= 00:04:16.023 04:19:19 -- setup/common.sh@19 -- # local var val 00:04:16.023 04:19:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.023 04:19:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.023 04:19:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.023 04:19:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.023 04:19:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.023 04:19:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8114176 kB' 'MemAvailable: 9496276 kB' 'Buffers: 2684 kB' 'Cached: 1595836 kB' 'SwapCached: 0 kB' 'Active: 453740 kB' 'Inactive: 1260204 kB' 'Active(anon): 125908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117036 kB' 'Mapped: 49944 kB' 'Shmem: 10484 kB' 'KReclaimable: 62020 kB' 'Slab: 155980 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 93960 kB' 'KernelStack: 6304 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 314340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 4030464 kB' 'DirectMap1G: 10485760 kB' 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- 
setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 
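(Aside on the accounting being traced here: verify_nr_hugepages in setup/hugepages.sh reads AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total this way, checks that the system-wide total matches the requested pool plus any surplus and reserved pages, and then repeats the count per NUMA node from /sys/devices/system/node/nodeN/meminfo, which is where the earlier "node0=1024 expecting 1024" line comes from. The sketch below is illustrative only and reuses the get_meminfo sketch above; the function name and node loop are assumptions, and the real script accumulates per-node counts in its nodes_test/nodes_sys arrays instead.)

verify_hugepages_sketch() {
    local expected=$1                      # e.g. 1024 in this run
    local total surp resv
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # the system-wide pool must account for the requested pages plus surplus/reserved
    (( total == expected + surp + resv )) || return 1
    # per-node view, read from the sysfs meminfo of each NUMA node
    local node_dir node
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node$node=$(grep -m1 'HugePages_Total' "$node_dir/meminfo" | awk '{print $NF}') expecting $expected"
    done
}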
00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.023 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.023 04:19:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 
04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.024 04:19:19 -- setup/common.sh@33 -- # echo 1024 00:04:16.024 04:19:19 -- setup/common.sh@33 -- # return 0 00:04:16.024 04:19:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.024 04:19:19 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.024 04:19:19 -- setup/hugepages.sh@27 -- # local node 00:04:16.024 04:19:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.024 04:19:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.024 04:19:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.024 04:19:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.024 04:19:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.024 04:19:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.024 04:19:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.024 04:19:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.024 04:19:19 -- setup/common.sh@18 -- # local node=0 00:04:16.024 04:19:19 -- setup/common.sh@19 -- # local var val 00:04:16.024 04:19:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.024 04:19:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.024 04:19:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.024 04:19:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.024 04:19:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.024 04:19:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8114176 kB' 'MemUsed: 4124944 kB' 'SwapCached: 0 kB' 'Active: 453700 kB' 'Inactive: 1260204 kB' 'Active(anon): 125868 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1260204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 
'Writeback: 0 kB' 'FilePages: 1598520 kB' 'Mapped: 49944 kB' 'AnonPages: 117036 kB' 'Shmem: 10484 kB' 'KernelStack: 6304 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62020 kB' 'Slab: 155980 kB' 'SReclaimable: 62020 kB' 'SUnreclaim: 93960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.024 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.024 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 
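The same matching loop is now running a second time, but against the per-node file: when get_meminfo is given a node number, it switches to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every line (the mem=("${mem[@]#Node +([0-9]) }") step above) before looking for HugePages_Surp. A hedged sketch of that branch, again with illustrative names:

    # Per-node variant: same key/value match, but the NUMA meminfo lines
    # carry a "Node N " prefix that must be stripped first (extglob).
    node_meminfo_value() {
        local node=$1 want=$2 line var val _
        shopt -s extglob
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }            # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    # node_meminfo_value 0 HugePages_Surp   -> 0 in this run, so the test's
    # node0=1024 expectation is not perturbed by surplus pages.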
04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- 
# continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@32 -- # continue 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.025 04:19:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.025 04:19:19 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.025 04:19:19 -- setup/common.sh@33 -- # echo 0 00:04:16.025 04:19:19 -- setup/common.sh@33 -- # return 0 00:04:16.025 04:19:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.025 04:19:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.025 04:19:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.025 04:19:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.025 04:19:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.025 node0=1024 expecting 1024 00:04:16.025 04:19:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.025 00:04:16.025 real 0m1.050s 00:04:16.025 user 0m0.536s 00:04:16.025 sys 0m0.584s 00:04:16.025 04:19:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.025 04:19:19 -- common/autotest_common.sh@10 -- # set +x 00:04:16.025 ************************************ 00:04:16.025 END TEST no_shrink_alloc 00:04:16.025 ************************************ 00:04:16.025 04:19:19 -- setup/hugepages.sh@217 -- # clear_hp 00:04:16.025 04:19:19 -- setup/hugepages.sh@37 -- # local node hp 00:04:16.025 04:19:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:16.025 04:19:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.025 04:19:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:16.025 04:19:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:16.025 04:19:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:16.025 04:19:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:16.025 04:19:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:16.025 00:04:16.025 real 0m4.873s 00:04:16.026 user 0m2.396s 00:04:16.026 sys 0m2.476s 00:04:16.026 04:19:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.026 04:19:19 -- common/autotest_common.sh@10 -- # set +x 00:04:16.026 ************************************ 00:04:16.026 END TEST hugepages 00:04:16.026 ************************************ 00:04:16.026 04:19:19 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:16.026 04:19:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.026 04:19:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.026 04:19:19 -- common/autotest_common.sh@10 -- # set +x 00:04:16.026 ************************************ 00:04:16.026 START TEST driver 00:04:16.026 ************************************ 00:04:16.026 04:19:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:16.284 * Looking for test storage... 
00:04:16.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.284 04:19:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:16.284 04:19:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:16.285 04:19:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:16.285 04:19:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:16.285 04:19:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:16.285 04:19:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:16.285 04:19:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:16.285 04:19:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:16.285 04:19:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:16.285 04:19:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.285 04:19:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:16.285 04:19:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:16.285 04:19:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:16.285 04:19:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:16.285 04:19:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:16.285 04:19:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:16.285 04:19:19 -- scripts/common.sh@344 -- # : 1 00:04:16.285 04:19:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:16.285 04:19:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.285 04:19:19 -- scripts/common.sh@364 -- # decimal 1 00:04:16.285 04:19:19 -- scripts/common.sh@352 -- # local d=1 00:04:16.285 04:19:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.285 04:19:19 -- scripts/common.sh@354 -- # echo 1 00:04:16.285 04:19:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:16.285 04:19:19 -- scripts/common.sh@365 -- # decimal 2 00:04:16.285 04:19:19 -- scripts/common.sh@352 -- # local d=2 00:04:16.285 04:19:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.285 04:19:19 -- scripts/common.sh@354 -- # echo 2 00:04:16.285 04:19:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:16.285 04:19:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:16.285 04:19:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:16.285 04:19:19 -- scripts/common.sh@367 -- # return 0 00:04:16.285 04:19:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.285 04:19:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.285 --rc genhtml_branch_coverage=1 00:04:16.285 --rc genhtml_function_coverage=1 00:04:16.285 --rc genhtml_legend=1 00:04:16.285 --rc geninfo_all_blocks=1 00:04:16.285 --rc geninfo_unexecuted_blocks=1 00:04:16.285 00:04:16.285 ' 00:04:16.285 04:19:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.285 --rc genhtml_branch_coverage=1 00:04:16.285 --rc genhtml_function_coverage=1 00:04:16.285 --rc genhtml_legend=1 00:04:16.285 --rc geninfo_all_blocks=1 00:04:16.285 --rc geninfo_unexecuted_blocks=1 00:04:16.285 00:04:16.285 ' 00:04:16.285 04:19:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.285 --rc genhtml_branch_coverage=1 00:04:16.285 --rc genhtml_function_coverage=1 00:04:16.285 --rc genhtml_legend=1 00:04:16.285 --rc geninfo_all_blocks=1 00:04:16.285 --rc geninfo_unexecuted_blocks=1 00:04:16.285 00:04:16.285 ' 00:04:16.285 04:19:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:16.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.285 --rc genhtml_branch_coverage=1 00:04:16.285 --rc genhtml_function_coverage=1 00:04:16.285 --rc genhtml_legend=1 00:04:16.285 --rc geninfo_all_blocks=1 00:04:16.285 --rc geninfo_unexecuted_blocks=1 00:04:16.285 00:04:16.285 ' 00:04:16.285 04:19:19 -- setup/driver.sh@68 -- # setup reset 00:04:16.285 04:19:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.285 04:19:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.854 04:19:19 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:16.854 04:19:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.854 04:19:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.854 04:19:19 -- common/autotest_common.sh@10 -- # set +x 00:04:16.854 ************************************ 00:04:16.854 START TEST guess_driver 00:04:16.854 ************************************ 00:04:16.854 04:19:19 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:16.854 04:19:19 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:16.854 04:19:19 -- setup/driver.sh@47 -- # local fail=0 00:04:16.854 04:19:19 -- setup/driver.sh@49 -- # pick_driver 00:04:16.854 04:19:19 -- setup/driver.sh@36 -- # vfio 00:04:16.854 04:19:19 -- setup/driver.sh@21 -- # local iommu_grups 00:04:16.854 04:19:19 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:16.854 04:19:19 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:16.854 04:19:19 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:16.854 04:19:19 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:16.854 04:19:19 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:16.854 04:19:19 -- setup/driver.sh@32 -- # return 1 00:04:16.854 04:19:19 -- setup/driver.sh@38 -- # uio 00:04:16.854 04:19:19 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:16.854 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:16.854 04:19:19 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:16.854 04:19:19 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:16.854 Looking for driver=uio_pci_generic 00:04:16.854 04:19:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.854 04:19:19 -- setup/driver.sh@45 -- # setup output config 00:04:16.854 04:19:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.854 04:19:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.422 04:19:20 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:17.422 04:19:20 -- setup/driver.sh@58 -- # continue 00:04:17.422 04:19:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.680 04:19:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.680 04:19:20 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:17.680 04:19:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.680 04:19:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.680 04:19:20 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:17.680 04:19:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.680 04:19:20 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.680 04:19:20 -- setup/driver.sh@65 -- # setup reset 00:04:17.680 04:19:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.680 04:19:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.247 ************************************ 00:04:18.247 END TEST guess_driver 00:04:18.247 ************************************ 00:04:18.247 00:04:18.247 real 0m1.472s 00:04:18.247 user 0m0.567s 00:04:18.247 sys 0m0.852s 00:04:18.247 04:19:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.247 04:19:21 -- common/autotest_common.sh@10 -- # set +x 00:04:18.247 ************************************ 00:04:18.247 END TEST driver 00:04:18.247 ************************************ 00:04:18.247 00:04:18.247 real 0m2.255s 00:04:18.247 user 0m0.898s 00:04:18.247 sys 0m1.375s 00:04:18.247 04:19:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.247 04:19:21 -- common/autotest_common.sh@10 -- # set +x 00:04:18.508 04:19:21 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:18.508 04:19:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.508 04:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.508 04:19:21 -- common/autotest_common.sh@10 -- # set +x 00:04:18.508 ************************************ 00:04:18.508 START TEST devices 00:04:18.508 ************************************ 00:04:18.508 04:19:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:18.508 * Looking for test storage... 00:04:18.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.508 04:19:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:18.508 04:19:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:18.508 04:19:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:18.508 04:19:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:18.508 04:19:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:18.508 04:19:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:18.508 04:19:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:18.508 04:19:21 -- scripts/common.sh@335 -- # IFS=.-: 00:04:18.508 04:19:21 -- scripts/common.sh@335 -- # read -ra ver1 00:04:18.508 04:19:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.508 04:19:21 -- scripts/common.sh@336 -- # read -ra ver2 00:04:18.508 04:19:21 -- scripts/common.sh@337 -- # local 'op=<' 00:04:18.508 04:19:21 -- scripts/common.sh@339 -- # ver1_l=2 00:04:18.508 04:19:21 -- scripts/common.sh@340 -- # ver2_l=1 00:04:18.508 04:19:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:18.508 04:19:21 -- scripts/common.sh@343 -- # case "$op" in 00:04:18.508 04:19:21 -- scripts/common.sh@344 -- # : 1 00:04:18.508 04:19:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:18.508 04:19:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.508 04:19:21 -- scripts/common.sh@364 -- # decimal 1 00:04:18.508 04:19:21 -- scripts/common.sh@352 -- # local d=1 00:04:18.508 04:19:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.508 04:19:21 -- scripts/common.sh@354 -- # echo 1 00:04:18.508 04:19:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:18.508 04:19:21 -- scripts/common.sh@365 -- # decimal 2 00:04:18.508 04:19:21 -- scripts/common.sh@352 -- # local d=2 00:04:18.508 04:19:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.508 04:19:21 -- scripts/common.sh@354 -- # echo 2 00:04:18.508 04:19:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:18.508 04:19:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:18.508 04:19:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:18.508 04:19:21 -- scripts/common.sh@367 -- # return 0 00:04:18.508 04:19:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.508 04:19:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:18.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.508 --rc genhtml_branch_coverage=1 00:04:18.508 --rc genhtml_function_coverage=1 00:04:18.508 --rc genhtml_legend=1 00:04:18.508 --rc geninfo_all_blocks=1 00:04:18.508 --rc geninfo_unexecuted_blocks=1 00:04:18.508 00:04:18.508 ' 00:04:18.508 04:19:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:18.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.508 --rc genhtml_branch_coverage=1 00:04:18.508 --rc genhtml_function_coverage=1 00:04:18.508 --rc genhtml_legend=1 00:04:18.508 --rc geninfo_all_blocks=1 00:04:18.508 --rc geninfo_unexecuted_blocks=1 00:04:18.508 00:04:18.508 ' 00:04:18.508 04:19:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:18.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.508 --rc genhtml_branch_coverage=1 00:04:18.508 --rc genhtml_function_coverage=1 00:04:18.508 --rc genhtml_legend=1 00:04:18.508 --rc geninfo_all_blocks=1 00:04:18.508 --rc geninfo_unexecuted_blocks=1 00:04:18.508 00:04:18.508 ' 00:04:18.508 04:19:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:18.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.508 --rc genhtml_branch_coverage=1 00:04:18.508 --rc genhtml_function_coverage=1 00:04:18.508 --rc genhtml_legend=1 00:04:18.508 --rc geninfo_all_blocks=1 00:04:18.508 --rc geninfo_unexecuted_blocks=1 00:04:18.508 00:04:18.508 ' 00:04:18.508 04:19:21 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.508 04:19:21 -- setup/devices.sh@192 -- # setup reset 00:04:18.508 04:19:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.508 04:19:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.442 04:19:22 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.442 04:19:22 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:19.442 04:19:22 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:19.442 04:19:22 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:19.442 04:19:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:19.442 04:19:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:19.442 04:19:22 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:19.442 04:19:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:19.442 04:19:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:19.442 04:19:22 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:19.442 04:19:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:19.442 04:19:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:19.442 04:19:22 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:19.442 04:19:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:19.442 04:19:22 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:19.442 04:19:22 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:19.442 04:19:22 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:19.442 04:19:22 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:19.442 04:19:22 -- setup/devices.sh@196 -- # blocks=() 00:04:19.442 04:19:22 -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.442 04:19:22 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.442 04:19:22 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.442 04:19:22 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.442 04:19:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.442 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.442 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.442 04:19:22 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:19.442 04:19:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:19.442 04:19:22 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.442 04:19:22 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:19.443 04:19:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.443 No valid GPT data, bailing 00:04:19.443 04:19:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.443 04:19:22 -- scripts/common.sh@393 -- # pt= 00:04:19.443 04:19:22 -- scripts/common.sh@394 -- # return 1 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.443 04:19:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.443 04:19:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.443 04:19:22 -- setup/common.sh@80 -- # echo 5368709120 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:19.443 04:19:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.443 04:19:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:19.443 04:19:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.443 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:19.443 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:19.443 04:19:22 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:19.443 04:19:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
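The is_block_zoned calls traced here are the device filter for the devices tests: a namespace is only considered if /sys/block/<dev>/queue/zoned reads "none" (the repeated "[[ none != none ]]" lines are that comparison evaluating to false, i.e. not zoned). A small stand-alone version of the same check, with illustrative names:

    # Treat a block device as zoned only if its queue/zoned attribute
    # exists and is not "none"; zoned devices are skipped by the test.
    is_zoned() {
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned ]] || return 1
        [[ $(cat "/sys/block/$dev/queue/zoned") != none ]]
    }

    for path in /sys/block/nvme*; do
        dev=${path##*/}
        is_zoned "$dev" && echo "skipping zoned device $dev"
    done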
00:04:19.443 04:19:22 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:19.443 04:19:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:19.443 No valid GPT data, bailing 00:04:19.443 04:19:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:19.443 04:19:22 -- scripts/common.sh@393 -- # pt= 00:04:19.443 04:19:22 -- scripts/common.sh@394 -- # return 1 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:19.443 04:19:22 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:19.443 04:19:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:19.443 04:19:22 -- setup/common.sh@80 -- # echo 4294967296 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:19.443 04:19:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.443 04:19:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:19.443 04:19:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.443 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:19.443 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:19.443 04:19:22 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:19.443 04:19:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:19.443 04:19:22 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:19.443 04:19:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:19.443 No valid GPT data, bailing 00:04:19.443 04:19:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:19.443 04:19:22 -- scripts/common.sh@393 -- # pt= 00:04:19.443 04:19:22 -- scripts/common.sh@394 -- # return 1 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:19.443 04:19:22 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:19.443 04:19:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:19.443 04:19:22 -- setup/common.sh@80 -- # echo 4294967296 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:19.443 04:19:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.443 04:19:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:19.443 04:19:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.443 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:19.443 04:19:22 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:19.443 04:19:22 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:19.443 04:19:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:19.443 04:19:22 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:19.443 04:19:22 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:19.443 04:19:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:19.702 No valid GPT data, bailing 00:04:19.702 04:19:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:19.702 04:19:22 -- scripts/common.sh@393 -- # pt= 00:04:19.702 04:19:22 -- scripts/common.sh@394 -- # return 1 00:04:19.702 04:19:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:19.702 04:19:22 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:19.702 04:19:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:19.702 04:19:22 -- setup/common.sh@80 -- # echo 4294967296 
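Each non-zoned namespace then has to pass two gates before it is added to the blocks array: it must not already carry a partition table (spdk-gpt.py and blkid report nothing, hence the "No valid GPT data, bailing" lines), and sec_size_to_bytes must report at least min_disk_size=3221225472 bytes (3 GiB). A rough sketch of those gates; deriving the byte count from /sys/block/<dev>/size times 512 is an assumption here, since the trace only shows the final number being echoed.

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in devices.sh

    usable_test_disk() {
        local dev=$1 pt bytes
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -z $pt ]] || return 1                           # already partitioned
        bytes=$(( $(cat "/sys/block/$dev/size") * 512 ))   # assumption: 512-byte sectors
        (( bytes >= min_disk_size ))
    }

    # usable_test_disk nvme0n1   -> true here (5368709120 bytes, no PTTYPE)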
00:04:19.702 04:19:22 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:19.702 04:19:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.702 04:19:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:19.702 04:19:22 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:19.702 04:19:22 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.702 04:19:22 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.702 04:19:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.702 04:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.702 04:19:22 -- common/autotest_common.sh@10 -- # set +x 00:04:19.702 ************************************ 00:04:19.702 START TEST nvme_mount 00:04:19.702 ************************************ 00:04:19.702 04:19:22 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:19.702 04:19:22 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.702 04:19:22 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.702 04:19:22 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.702 04:19:22 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.702 04:19:22 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.702 04:19:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.702 04:19:22 -- setup/common.sh@40 -- # local part_no=1 00:04:19.702 04:19:22 -- setup/common.sh@41 -- # local size=1073741824 00:04:19.702 04:19:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.702 04:19:22 -- setup/common.sh@44 -- # parts=() 00:04:19.702 04:19:22 -- setup/common.sh@44 -- # local parts 00:04:19.702 04:19:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.702 04:19:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.702 04:19:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.702 04:19:22 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.702 04:19:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.702 04:19:22 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:19.702 04:19:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.702 04:19:22 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:20.639 Creating new GPT entries in memory. 00:04:20.639 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.639 other utilities. 00:04:20.639 04:19:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.639 04:19:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.639 04:19:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.639 04:19:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.639 04:19:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:21.575 Creating new GPT entries in memory. 00:04:21.575 The operation has completed successfully. 
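sgdisk's "GPT data structures destroyed!" / "The operation has completed successfully." messages above belong to the partition_drive step of nvme_mount: the selected disk is zapped, one small partition is created, formatted, and mounted, and a marker file is dropped on it for the later verify step. The commands below collapse that sequence; the real script additionally wraps sgdisk in flock and waits for the resulting uevents via sync_dev_uevents.sh (omitted here), and the device path is a placeholder for whatever disk the test picked.

    disk=/dev/nvme0n1                              # placeholder test disk
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                       # wipe any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:264191             # one small test partition
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                         # marker checked by the verify step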
00:04:21.575 04:19:24 -- setup/common.sh@57 -- # (( part++ )) 00:04:21.575 04:19:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.575 04:19:24 -- setup/common.sh@62 -- # wait 52081 00:04:21.835 04:19:24 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.835 04:19:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:21.835 04:19:24 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.835 04:19:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:21.835 04:19:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:21.835 04:19:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.835 04:19:24 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.835 04:19:24 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:21.835 04:19:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:21.835 04:19:24 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.835 04:19:24 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.835 04:19:24 -- setup/devices.sh@53 -- # local found=0 00:04:21.835 04:19:24 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.835 04:19:24 -- setup/devices.sh@56 -- # : 00:04:21.835 04:19:24 -- setup/devices.sh@59 -- # local pci status 00:04:21.835 04:19:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.835 04:19:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:21.835 04:19:24 -- setup/devices.sh@47 -- # setup output config 00:04:21.835 04:19:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.835 04:19:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.835 04:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.835 04:19:25 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.835 04:19:25 -- setup/devices.sh@63 -- # found=1 00:04:21.835 04:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.835 04:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.835 04:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.400 04:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.400 04:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.400 04:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.400 04:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.400 04:19:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.400 04:19:25 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:22.400 04:19:25 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.400 04:19:25 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.400 04:19:25 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.400 04:19:25 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:22.400 04:19:25 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.400 04:19:25 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.400 04:19:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.400 04:19:25 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.400 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.401 04:19:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.401 04:19:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.659 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.659 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.659 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.659 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.659 04:19:25 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:22.659 04:19:25 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:22.659 04:19:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.659 04:19:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:22.659 04:19:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:22.659 04:19:25 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.659 04:19:25 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.659 04:19:25 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:22.659 04:19:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:22.659 04:19:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.659 04:19:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.659 04:19:25 -- setup/devices.sh@53 -- # local found=0 00:04:22.659 04:19:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.659 04:19:25 -- setup/devices.sh@56 -- # : 00:04:22.659 04:19:25 -- setup/devices.sh@59 -- # local pci status 00:04:22.659 04:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.659 04:19:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:22.659 04:19:25 -- setup/devices.sh@47 -- # setup output config 00:04:22.659 04:19:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.659 04:19:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.917 04:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.917 04:19:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:22.917 04:19:26 -- setup/devices.sh@63 -- # found=1 00:04:22.917 04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.917 04:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:22.917 
04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.177 04:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.177 04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.436 04:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.436 04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.436 04:19:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.436 04:19:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:23.436 04:19:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.436 04:19:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.436 04:19:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.436 04:19:26 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.436 04:19:26 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:23.436 04:19:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:23.436 04:19:26 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:23.436 04:19:26 -- setup/devices.sh@50 -- # local mount_point= 00:04:23.436 04:19:26 -- setup/devices.sh@51 -- # local test_file= 00:04:23.436 04:19:26 -- setup/devices.sh@53 -- # local found=0 00:04:23.436 04:19:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.436 04:19:26 -- setup/devices.sh@59 -- # local pci status 00:04:23.436 04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.436 04:19:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:23.436 04:19:26 -- setup/devices.sh@47 -- # setup output config 00:04:23.436 04:19:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.436 04:19:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.695 04:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.695 04:19:26 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.695 04:19:26 -- setup/devices.sh@63 -- # found=1 00:04:23.695 04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.695 04:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.695 04:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.954 04:19:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.954 04:19:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.954 04:19:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:23.954 04:19:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.213 04:19:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.213 04:19:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.213 04:19:27 -- setup/devices.sh@68 -- # return 0 00:04:24.213 04:19:27 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:24.213 04:19:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.213 04:19:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.213 04:19:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.213 04:19:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.213 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:24.213 00:04:24.213 real 0m4.534s 00:04:24.213 user 0m1.041s 00:04:24.213 sys 0m1.189s 00:04:24.213 04:19:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.213 04:19:27 -- common/autotest_common.sh@10 -- # set +x 00:04:24.213 ************************************ 00:04:24.213 END TEST nvme_mount 00:04:24.213 ************************************ 00:04:24.213 04:19:27 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:24.213 04:19:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.213 04:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.213 04:19:27 -- common/autotest_common.sh@10 -- # set +x 00:04:24.213 ************************************ 00:04:24.213 START TEST dm_mount 00:04:24.213 ************************************ 00:04:24.213 04:19:27 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:24.213 04:19:27 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:24.213 04:19:27 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:24.213 04:19:27 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:24.213 04:19:27 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:24.213 04:19:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.213 04:19:27 -- setup/common.sh@40 -- # local part_no=2 00:04:24.213 04:19:27 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.213 04:19:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.213 04:19:27 -- setup/common.sh@44 -- # parts=() 00:04:24.213 04:19:27 -- setup/common.sh@44 -- # local parts 00:04:24.213 04:19:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.213 04:19:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.213 04:19:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.213 04:19:27 -- setup/common.sh@46 -- # (( part++ )) 00:04:24.213 04:19:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.213 04:19:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.213 04:19:27 -- setup/common.sh@46 -- # (( part++ )) 00:04:24.213 04:19:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.213 04:19:27 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:24.213 04:19:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.213 04:19:27 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:25.150 Creating new GPT entries in memory. 00:04:25.150 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.150 other utilities. 00:04:25.150 04:19:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.150 04:19:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.150 04:19:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.150 04:19:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.150 04:19:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:26.529 Creating new GPT entries in memory. 00:04:26.529 The operation has completed successfully. 00:04:26.529 04:19:29 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.529 04:19:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.529 04:19:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:26.529 04:19:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.529 04:19:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:27.466 The operation has completed successfully. 00:04:27.466 04:19:30 -- setup/common.sh@57 -- # (( part++ )) 00:04:27.466 04:19:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.466 04:19:30 -- setup/common.sh@62 -- # wait 52541 00:04:27.466 04:19:30 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.466 04:19:30 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.466 04:19:30 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.466 04:19:30 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.466 04:19:30 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.466 04:19:30 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.466 04:19:30 -- setup/devices.sh@161 -- # break 00:04:27.466 04:19:30 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.466 04:19:30 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.466 04:19:30 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.466 04:19:30 -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.467 04:19:30 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:27.467 04:19:30 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:27.467 04:19:30 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.467 04:19:30 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:27.467 04:19:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.467 04:19:30 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.467 04:19:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.467 04:19:30 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.467 04:19:30 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.467 04:19:30 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:27.467 04:19:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.467 04:19:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.467 04:19:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.467 04:19:30 -- setup/devices.sh@53 -- # local found=0 00:04:27.467 04:19:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.467 04:19:30 -- setup/devices.sh@56 -- # : 00:04:27.467 04:19:30 -- setup/devices.sh@59 -- # local pci status 00:04:27.467 04:19:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.467 04:19:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:27.467 04:19:30 -- setup/devices.sh@47 -- # setup output config 00:04:27.467 04:19:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.467 04:19:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.467 04:19:30 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.467 04:19:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:27.467 04:19:30 -- setup/devices.sh@63 -- # found=1 00:04:27.467 04:19:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.730 04:19:30 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.730 04:19:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.997 04:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.997 04:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.997 04:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.997 04:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.997 04:19:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.997 04:19:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:27.997 04:19:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.997 04:19:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.997 04:19:31 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.997 04:19:31 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.997 04:19:31 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:27.997 04:19:31 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:27.997 04:19:31 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:27.997 04:19:31 -- setup/devices.sh@50 -- # local mount_point= 00:04:27.997 04:19:31 -- setup/devices.sh@51 -- # local test_file= 00:04:27.997 04:19:31 -- setup/devices.sh@53 -- # local found=0 00:04:27.997 04:19:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.997 04:19:31 -- setup/devices.sh@59 -- # local pci status 00:04:27.997 04:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.997 04:19:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:27.997 04:19:31 -- setup/devices.sh@47 -- # setup output config 00:04:27.997 04:19:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.997 04:19:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.257 04:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.257 04:19:31 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:28.257 04:19:31 -- setup/devices.sh@63 -- # found=1 00:04:28.257 04:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.257 04:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.257 04:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.516 04:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.516 04:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.775 04:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:28.775 04:19:31 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.775 04:19:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.775 04:19:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.775 04:19:31 -- setup/devices.sh@68 -- # return 0 00:04:28.775 04:19:31 -- setup/devices.sh@187 -- # cleanup_dm 00:04:28.775 04:19:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.775 04:19:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.775 04:19:31 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:28.775 04:19:31 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.775 04:19:31 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:28.775 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.775 04:19:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.775 04:19:31 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:28.775 00:04:28.775 real 0m4.563s 00:04:28.775 user 0m0.674s 00:04:28.775 sys 0m0.820s 00:04:28.775 ************************************ 00:04:28.775 END TEST dm_mount 00:04:28.775 ************************************ 00:04:28.775 04:19:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.775 04:19:31 -- common/autotest_common.sh@10 -- # set +x 00:04:28.775 04:19:31 -- setup/devices.sh@1 -- # cleanup 00:04:28.775 04:19:31 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:28.775 04:19:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.775 04:19:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.775 04:19:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.775 04:19:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.775 04:19:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.034 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.034 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.034 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.034 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.034 04:19:32 -- setup/devices.sh@12 -- # cleanup_dm 00:04:29.034 04:19:32 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.034 04:19:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.034 04:19:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.034 04:19:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.034 04:19:32 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.034 04:19:32 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:29.034 ************************************ 00:04:29.034 END TEST devices 00:04:29.034 ************************************ 00:04:29.034 00:04:29.034 real 0m10.731s 00:04:29.034 user 0m2.463s 00:04:29.034 sys 0m2.607s 00:04:29.034 04:19:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.035 04:19:32 -- common/autotest_common.sh@10 -- # set +x 00:04:29.035 00:04:29.035 real 0m22.580s 00:04:29.035 user 0m7.866s 00:04:29.035 sys 0m9.028s 00:04:29.035 04:19:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.035 04:19:32 -- common/autotest_common.sh@10 -- # set +x 00:04:29.035 ************************************ 00:04:29.035 END TEST setup.sh 00:04:29.035 ************************************ 00:04:29.294 04:19:32 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:29.294 Hugepages 00:04:29.294 node hugesize free / total 00:04:29.294 node0 1048576kB 0 / 0 00:04:29.294 node0 2048kB 2048 / 2048 00:04:29.294 00:04:29.294 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.294 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:29.554 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:29.554 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:29.554 04:19:32 -- spdk/autotest.sh@128 -- # uname -s 00:04:29.554 04:19:32 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:29.554 04:19:32 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:29.554 04:19:32 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.381 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.381 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.381 04:19:33 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:31.350 04:19:34 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:31.350 04:19:34 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:31.350 04:19:34 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.350 04:19:34 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:31.350 04:19:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:31.350 04:19:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:31.350 04:19:34 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.350 04:19:34 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:31.350 04:19:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:31.610 04:19:34 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:31.610 04:19:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:31.610 04:19:34 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.870 Waiting for block devices as requested 00:04:31.870 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.870 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.136 04:19:35 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:32.136 04:19:35 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:32.136 04:19:35 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:32.136 04:19:35 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:32.136 04:19:35 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:32.136 04:19:35 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:32.136 04:19:35 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1552 -- # continue 00:04:32.136 04:19:35 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:32.136 04:19:35 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:32.136 04:19:35 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:32.136 04:19:35 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:32.136 04:19:35 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:32.136 04:19:35 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:32.136 04:19:35 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:32.136 04:19:35 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:32.136 04:19:35 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:32.136 04:19:35 -- common/autotest_common.sh@1552 -- # continue 00:04:32.136 04:19:35 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:32.136 04:19:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.136 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:04:32.136 04:19:35 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:32.136 04:19:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.136 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:04:32.136 04:19:35 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.962 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.962 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:32.962 04:19:36 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:32.962 04:19:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.962 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:04:33.220 04:19:36 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:33.221 04:19:36 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:33.221 04:19:36 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:33.221 04:19:36 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:33.221 04:19:36 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:33.221 04:19:36 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:33.221 04:19:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:33.221 04:19:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:33.221 04:19:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.221 04:19:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.221 04:19:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:33.221 04:19:36 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:33.221 04:19:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:33.221 04:19:36 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:33.221 04:19:36 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:33.221 04:19:36 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:33.221 04:19:36 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:33.221 04:19:36 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:33.221 04:19:36 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:33.221 04:19:36 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:33.221 04:19:36 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:33.221 04:19:36 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:33.221 04:19:36 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:33.221 04:19:36 -- common/autotest_common.sh@1588 -- # return 0 00:04:33.221 04:19:36 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:33.221 04:19:36 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:33.221 04:19:36 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:33.221 04:19:36 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:33.221 04:19:36 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:33.221 04:19:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.221 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:04:33.221 04:19:36 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:33.221 04:19:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.221 04:19:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.221 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:04:33.221 ************************************ 00:04:33.221 START TEST env 00:04:33.221 ************************************ 00:04:33.221 04:19:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:33.221 * Looking for test storage... 
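Note: opal_revert_cleanup above looks for NVMe controllers whose PCI device ID is 0x0a54 and finds none, since both emulated controllers report 0x0010. A minimal sketch of the same filter by hand (BDFs and sysfs paths taken from the log; not part of the test itself):

  # print only controllers matching the OPAL-capable device ID 0x0a54
  for bdf in 0000:00:06.0 0000:00:07.0; do
    [[ "$(cat /sys/bus/pci/devices/$bdf/device)" == 0x0a54 ]] && echo "$bdf"
  done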
00:04:33.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:33.221 04:19:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:33.221 04:19:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:33.221 04:19:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:33.480 04:19:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:33.480 04:19:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:33.480 04:19:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:33.480 04:19:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:33.480 04:19:36 -- scripts/common.sh@335 -- # IFS=.-: 00:04:33.480 04:19:36 -- scripts/common.sh@335 -- # read -ra ver1 00:04:33.480 04:19:36 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.480 04:19:36 -- scripts/common.sh@336 -- # read -ra ver2 00:04:33.480 04:19:36 -- scripts/common.sh@337 -- # local 'op=<' 00:04:33.480 04:19:36 -- scripts/common.sh@339 -- # ver1_l=2 00:04:33.480 04:19:36 -- scripts/common.sh@340 -- # ver2_l=1 00:04:33.480 04:19:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:33.480 04:19:36 -- scripts/common.sh@343 -- # case "$op" in 00:04:33.480 04:19:36 -- scripts/common.sh@344 -- # : 1 00:04:33.480 04:19:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:33.480 04:19:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.480 04:19:36 -- scripts/common.sh@364 -- # decimal 1 00:04:33.480 04:19:36 -- scripts/common.sh@352 -- # local d=1 00:04:33.480 04:19:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.480 04:19:36 -- scripts/common.sh@354 -- # echo 1 00:04:33.480 04:19:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:33.480 04:19:36 -- scripts/common.sh@365 -- # decimal 2 00:04:33.480 04:19:36 -- scripts/common.sh@352 -- # local d=2 00:04:33.480 04:19:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.480 04:19:36 -- scripts/common.sh@354 -- # echo 2 00:04:33.480 04:19:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:33.480 04:19:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:33.480 04:19:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:33.480 04:19:36 -- scripts/common.sh@367 -- # return 0 00:04:33.480 04:19:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.480 04:19:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.480 --rc genhtml_branch_coverage=1 00:04:33.480 --rc genhtml_function_coverage=1 00:04:33.480 --rc genhtml_legend=1 00:04:33.480 --rc geninfo_all_blocks=1 00:04:33.480 --rc geninfo_unexecuted_blocks=1 00:04:33.480 00:04:33.480 ' 00:04:33.480 04:19:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.480 --rc genhtml_branch_coverage=1 00:04:33.480 --rc genhtml_function_coverage=1 00:04:33.480 --rc genhtml_legend=1 00:04:33.480 --rc geninfo_all_blocks=1 00:04:33.480 --rc geninfo_unexecuted_blocks=1 00:04:33.480 00:04:33.480 ' 00:04:33.480 04:19:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.480 --rc genhtml_branch_coverage=1 00:04:33.480 --rc genhtml_function_coverage=1 00:04:33.480 --rc genhtml_legend=1 00:04:33.480 --rc geninfo_all_blocks=1 00:04:33.480 --rc geninfo_unexecuted_blocks=1 00:04:33.480 00:04:33.480 ' 00:04:33.480 04:19:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.480 --rc genhtml_branch_coverage=1 00:04:33.480 --rc genhtml_function_coverage=1 00:04:33.480 --rc genhtml_legend=1 00:04:33.480 --rc geninfo_all_blocks=1 00:04:33.480 --rc geninfo_unexecuted_blocks=1 00:04:33.480 00:04:33.480 ' 00:04:33.480 04:19:36 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:33.480 04:19:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.480 04:19:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.480 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:04:33.480 ************************************ 00:04:33.480 START TEST env_memory 00:04:33.480 ************************************ 00:04:33.480 04:19:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:33.480 00:04:33.480 00:04:33.480 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.480 http://cunit.sourceforge.net/ 00:04:33.480 00:04:33.480 00:04:33.480 Suite: memory 00:04:33.480 Test: alloc and free memory map ...[2024-12-07 04:19:36.557508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:33.480 passed 00:04:33.480 Test: mem map translation ...[2024-12-07 04:19:36.588239] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:33.480 [2024-12-07 04:19:36.588277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:33.480 [2024-12-07 04:19:36.588332] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:33.480 [2024-12-07 04:19:36.588342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:33.480 passed 00:04:33.480 Test: mem map registration ...[2024-12-07 04:19:36.655619] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:33.480 [2024-12-07 04:19:36.655712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:33.480 passed 00:04:33.740 Test: mem map adjacent registrations ...passed 00:04:33.740 00:04:33.740 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.740 suites 1 1 n/a 0 0 00:04:33.740 tests 4 4 4 0 0 00:04:33.740 asserts 152 152 152 0 n/a 00:04:33.740 00:04:33.740 Elapsed time = 0.219 seconds 00:04:33.740 00:04:33.740 real 0m0.239s 00:04:33.740 user 0m0.220s 00:04:33.740 sys 0m0.014s 00:04:33.740 04:19:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.740 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:04:33.740 ************************************ 00:04:33.740 END TEST env_memory 00:04:33.740 ************************************ 00:04:33.740 04:19:36 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.740 04:19:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.740 04:19:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.740 04:19:36 -- 
common/autotest_common.sh@10 -- # set +x 00:04:33.740 ************************************ 00:04:33.740 START TEST env_vtophys 00:04:33.740 ************************************ 00:04:33.740 04:19:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.740 EAL: lib.eal log level changed from notice to debug 00:04:33.740 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 1 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 2 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 3 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 4 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 5 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 6 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 7 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 8 as core 0 on socket 0 00:04:33.740 EAL: Detected lcore 9 as core 0 on socket 0 00:04:33.740 EAL: Maximum logical cores by configuration: 128 00:04:33.740 EAL: Detected CPU lcores: 10 00:04:33.740 EAL: Detected NUMA nodes: 1 00:04:33.740 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:33.740 EAL: Detected shared linkage of DPDK 00:04:33.740 EAL: No shared files mode enabled, IPC will be disabled 00:04:33.740 EAL: Selected IOVA mode 'PA' 00:04:33.740 EAL: Probing VFIO support... 00:04:33.740 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.740 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:33.740 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.740 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.740 EAL: Setting up physically contiguous memory... 00:04:33.740 EAL: Setting maximum number of open files to 524288 00:04:33.740 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.740 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.740 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.740 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.740 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.741 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.741 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.741 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.741 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.741 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.741 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.741 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.741 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.741 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.741 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.741 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.741 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.741 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.741 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
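Note: the EAL startup above skips VFIO because /sys/module/vfio is absent and then reserves four 2 MiB-hugepage memseg lists in IOVA PA mode. A minimal sketch of checking those same preconditions on the host (paths as logged; this is an illustration, not part of the test):

  # VFIO module presence, mirroring the "Module /sys/module/vfio not found" probe
  [[ -d /sys/module/vfio ]] && echo 'vfio: loaded' || echo 'vfio: not loaded'
  # 2 MiB hugepages that back the reserved memseg lists
  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo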
00:04:33.741 EAL: Hugepages will be freed exactly as allocated. 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: TSC frequency is ~2200000 KHz 00:04:33.741 EAL: Main lcore 0 is ready (tid=7f2c9ae66a00;cpuset=[0]) 00:04:33.741 EAL: Trying to obtain current memory policy. 00:04:33.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.741 EAL: Restoring previous memory policy: 0 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.741 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.741 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.741 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.741 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:33.741 00:04:33.741 00:04:33.741 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.741 http://cunit.sourceforge.net/ 00:04:33.741 00:04:33.741 00:04:33.741 Suite: components_suite 00:04:33.741 Test: vtophys_malloc_test ...passed 00:04:33.741 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.741 EAL: Restoring previous memory policy: 4 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.741 EAL: Trying to obtain current memory policy. 00:04:33.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.741 EAL: Restoring previous memory policy: 4 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.741 EAL: Trying to obtain current memory policy. 00:04:33.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.741 EAL: Restoring previous memory policy: 4 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.741 EAL: Trying to obtain current memory policy. 
00:04:33.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.741 EAL: Restoring previous memory policy: 4 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.741 EAL: Trying to obtain current memory policy. 00:04:33.741 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.741 EAL: Restoring previous memory policy: 4 00:04:33.741 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.741 EAL: request: mp_malloc_sync 00:04:33.741 EAL: No shared files mode enabled, IPC is disabled 00:04:33.741 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.000 EAL: request: mp_malloc_sync 00:04:34.000 EAL: No shared files mode enabled, IPC is disabled 00:04:34.000 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.000 EAL: Trying to obtain current memory policy. 00:04:34.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.000 EAL: Restoring previous memory policy: 4 00:04:34.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.000 EAL: request: mp_malloc_sync 00:04:34.000 EAL: No shared files mode enabled, IPC is disabled 00:04:34.000 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.000 EAL: request: mp_malloc_sync 00:04:34.000 EAL: No shared files mode enabled, IPC is disabled 00:04:34.000 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.000 EAL: Trying to obtain current memory policy. 00:04:34.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.000 EAL: Restoring previous memory policy: 4 00:04:34.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.000 EAL: request: mp_malloc_sync 00:04:34.000 EAL: No shared files mode enabled, IPC is disabled 00:04:34.000 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.000 EAL: request: mp_malloc_sync 00:04:34.000 EAL: No shared files mode enabled, IPC is disabled 00:04:34.000 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.000 EAL: Trying to obtain current memory policy. 00:04:34.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.000 EAL: Restoring previous memory policy: 4 00:04:34.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.001 EAL: request: mp_malloc_sync 00:04:34.001 EAL: No shared files mode enabled, IPC is disabled 00:04:34.001 EAL: Heap on socket 0 was expanded by 258MB 00:04:34.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.001 EAL: request: mp_malloc_sync 00:04:34.001 EAL: No shared files mode enabled, IPC is disabled 00:04:34.001 EAL: Heap on socket 0 was shrunk by 258MB 00:04:34.001 EAL: Trying to obtain current memory policy. 
00:04:34.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.260 EAL: Restoring previous memory policy: 4 00:04:34.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.260 EAL: request: mp_malloc_sync 00:04:34.260 EAL: No shared files mode enabled, IPC is disabled 00:04:34.260 EAL: Heap on socket 0 was expanded by 514MB 00:04:34.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.260 EAL: request: mp_malloc_sync 00:04:34.260 EAL: No shared files mode enabled, IPC is disabled 00:04:34.260 EAL: Heap on socket 0 was shrunk by 514MB 00:04:34.260 EAL: Trying to obtain current memory policy. 00:04:34.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.260 EAL: Restoring previous memory policy: 4 00:04:34.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.260 EAL: request: mp_malloc_sync 00:04:34.260 EAL: No shared files mode enabled, IPC is disabled 00:04:34.260 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.519 passed 00:04:34.519 00:04:34.519 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.519 suites 1 1 n/a 0 0 00:04:34.519 tests 2 2 2 0 0 00:04:34.519 asserts 5344 5344 5344 0 n/a 00:04:34.519 00:04:34.519 Elapsed time = 0.735 seconds 00:04:34.519 EAL: request: mp_malloc_sync 00:04:34.519 EAL: No shared files mode enabled, IPC is disabled 00:04:34.519 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:34.519 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.519 EAL: request: mp_malloc_sync 00:04:34.519 EAL: No shared files mode enabled, IPC is disabled 00:04:34.519 EAL: Heap on socket 0 was shrunk by 2MB 00:04:34.519 EAL: No shared files mode enabled, IPC is disabled 00:04:34.519 EAL: No shared files mode enabled, IPC is disabled 00:04:34.519 EAL: No shared files mode enabled, IPC is disabled 00:04:34.519 00:04:34.519 real 0m0.930s 00:04:34.519 user 0m0.481s 00:04:34.519 sys 0m0.317s 00:04:34.519 04:19:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.519 04:19:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.519 ************************************ 00:04:34.519 END TEST env_vtophys 00:04:34.519 ************************************ 00:04:34.779 04:19:37 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:34.779 04:19:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.779 04:19:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.779 04:19:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.779 ************************************ 00:04:34.779 START TEST env_pci 00:04:34.779 ************************************ 00:04:34.779 04:19:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:34.779 00:04:34.779 00:04:34.779 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.779 http://cunit.sourceforge.net/ 00:04:34.779 00:04:34.779 00:04:34.779 Suite: pci 00:04:34.779 Test: pci_hook ...[2024-12-07 04:19:37.798669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53674 has claimed it 00:04:34.779 passed 00:04:34.779 00:04:34.779 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.779 suites 1 1 n/a 0 0 00:04:34.779 tests 1 1 1 0 0 00:04:34.779 asserts 25 25 25 0 n/a 00:04:34.779 00:04:34.779 Elapsed time = 0.002 seconds 00:04:34.779 EAL: Cannot find device (10000:00:01.0) 00:04:34.779 EAL: Failed to attach device 
on primary process 00:04:34.779 00:04:34.779 real 0m0.019s 00:04:34.779 user 0m0.009s 00:04:34.779 sys 0m0.010s 00:04:34.779 04:19:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.779 04:19:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.779 ************************************ 00:04:34.779 END TEST env_pci 00:04:34.779 ************************************ 00:04:34.779 04:19:37 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:34.779 04:19:37 -- env/env.sh@15 -- # uname 00:04:34.779 04:19:37 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:34.779 04:19:37 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:34.779 04:19:37 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.779 04:19:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:34.779 04:19:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.779 04:19:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.779 ************************************ 00:04:34.779 START TEST env_dpdk_post_init 00:04:34.779 ************************************ 00:04:34.779 04:19:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.779 EAL: Detected CPU lcores: 10 00:04:34.779 EAL: Detected NUMA nodes: 1 00:04:34.779 EAL: Detected shared linkage of DPDK 00:04:34.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.779 EAL: Selected IOVA mode 'PA' 00:04:34.779 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.039 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:35.039 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:35.039 Starting DPDK initialization... 00:04:35.039 Starting SPDK post initialization... 00:04:35.039 SPDK NVMe probe 00:04:35.039 Attaching to 0000:00:06.0 00:04:35.039 Attaching to 0000:00:07.0 00:04:35.039 Attached to 0000:00:06.0 00:04:35.039 Attached to 0000:00:07.0 00:04:35.039 Cleaning up... 
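Note: env_dpdk_post_init above attaches the two emulated NVMe controllers (0000:00:06.0 and 0000:00:07.0) after setup.sh has rebound them to uio_pci_generic. A small hand check of the current kernel driver for each controller, with the BDFs taken from the log, could look like:

  for bdf in 0000:00:06.0 0000:00:07.0; do
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
    else
      echo "$bdf -> unbound"
    fi
  done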
00:04:35.039 00:04:35.039 real 0m0.182s 00:04:35.039 user 0m0.045s 00:04:35.039 sys 0m0.037s 00:04:35.039 04:19:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.039 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:35.039 ************************************ 00:04:35.039 END TEST env_dpdk_post_init 00:04:35.039 ************************************ 00:04:35.039 04:19:38 -- env/env.sh@26 -- # uname 00:04:35.039 04:19:38 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.039 04:19:38 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.039 04:19:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.039 04:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.039 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:35.039 ************************************ 00:04:35.039 START TEST env_mem_callbacks 00:04:35.039 ************************************ 00:04:35.039 04:19:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.039 EAL: Detected CPU lcores: 10 00:04:35.039 EAL: Detected NUMA nodes: 1 00:04:35.039 EAL: Detected shared linkage of DPDK 00:04:35.039 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.039 EAL: Selected IOVA mode 'PA' 00:04:35.039 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.039 00:04:35.039 00:04:35.039 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.039 http://cunit.sourceforge.net/ 00:04:35.039 00:04:35.039 00:04:35.039 Suite: memory 00:04:35.039 Test: test ... 00:04:35.039 register 0x200000200000 2097152 00:04:35.039 malloc 3145728 00:04:35.039 register 0x200000400000 4194304 00:04:35.039 buf 0x200000500000 len 3145728 PASSED 00:04:35.039 malloc 64 00:04:35.039 buf 0x2000004fff40 len 64 PASSED 00:04:35.039 malloc 4194304 00:04:35.039 register 0x200000800000 6291456 00:04:35.039 buf 0x200000a00000 len 4194304 PASSED 00:04:35.039 free 0x200000500000 3145728 00:04:35.039 free 0x2000004fff40 64 00:04:35.039 unregister 0x200000400000 4194304 PASSED 00:04:35.039 free 0x200000a00000 4194304 00:04:35.039 unregister 0x200000800000 6291456 PASSED 00:04:35.039 malloc 8388608 00:04:35.039 register 0x200000400000 10485760 00:04:35.039 buf 0x200000600000 len 8388608 PASSED 00:04:35.039 free 0x200000600000 8388608 00:04:35.039 unregister 0x200000400000 10485760 PASSED 00:04:35.039 passed 00:04:35.039 00:04:35.039 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.039 suites 1 1 n/a 0 0 00:04:35.039 tests 1 1 1 0 0 00:04:35.039 asserts 15 15 15 0 n/a 00:04:35.039 00:04:35.039 Elapsed time = 0.008 seconds 00:04:35.039 00:04:35.039 real 0m0.147s 00:04:35.039 user 0m0.021s 00:04:35.039 sys 0m0.022s 00:04:35.039 04:19:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.039 ************************************ 00:04:35.039 END TEST env_mem_callbacks 00:04:35.039 ************************************ 00:04:35.039 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:35.299 00:04:35.299 real 0m1.987s 00:04:35.299 user 0m0.984s 00:04:35.299 sys 0m0.650s 00:04:35.299 04:19:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.299 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:35.299 ************************************ 00:04:35.299 END TEST env 00:04:35.299 ************************************ 00:04:35.299 04:19:38 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
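Note: every suite in this log is launched through the run_test helper, which prints the START/END banners and the real/user/sys timing lines seen above. A stripped-down approximation of that wrapper (banner text per the log; the real helper in autotest_common.sh also validates arguments and records timing data):

  run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
  }
  # e.g. the invocation that starts the next suite:
  run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh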
00:04:35.299 04:19:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.299 04:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.299 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:35.299 ************************************ 00:04:35.299 START TEST rpc 00:04:35.299 ************************************ 00:04:35.299 04:19:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:35.299 * Looking for test storage... 00:04:35.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.299 04:19:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.299 04:19:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.299 04:19:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.299 04:19:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.299 04:19:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.299 04:19:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.299 04:19:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.299 04:19:38 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.299 04:19:38 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.299 04:19:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.299 04:19:38 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.299 04:19:38 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.299 04:19:38 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.299 04:19:38 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.299 04:19:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.299 04:19:38 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.299 04:19:38 -- scripts/common.sh@344 -- # : 1 00:04:35.299 04:19:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.299 04:19:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.299 04:19:38 -- scripts/common.sh@364 -- # decimal 1 00:04:35.299 04:19:38 -- scripts/common.sh@352 -- # local d=1 00:04:35.299 04:19:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.299 04:19:38 -- scripts/common.sh@354 -- # echo 1 00:04:35.299 04:19:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.299 04:19:38 -- scripts/common.sh@365 -- # decimal 2 00:04:35.299 04:19:38 -- scripts/common.sh@352 -- # local d=2 00:04:35.558 04:19:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.558 04:19:38 -- scripts/common.sh@354 -- # echo 2 00:04:35.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
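Note: the rpc suite starts spdk_tgt with '-e bdev' and then blocks in waitforlisten until the default RPC socket at /var/tmp/spdk.sock answers. A rough standalone equivalent of that start-and-wait (binary and script paths as in the log; the real waitforlisten also monitors the pid):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # poll the RPC socket until the target is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done
  echo "spdk_tgt ($spdk_pid) is up on /var/tmp/spdk.sock"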
00:04:35.558 04:19:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.558 04:19:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.558 04:19:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.558 04:19:38 -- scripts/common.sh@367 -- # return 0 00:04:35.558 04:19:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.558 04:19:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.558 --rc genhtml_branch_coverage=1 00:04:35.558 --rc genhtml_function_coverage=1 00:04:35.558 --rc genhtml_legend=1 00:04:35.558 --rc geninfo_all_blocks=1 00:04:35.558 --rc geninfo_unexecuted_blocks=1 00:04:35.558 00:04:35.558 ' 00:04:35.558 04:19:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.558 --rc genhtml_branch_coverage=1 00:04:35.558 --rc genhtml_function_coverage=1 00:04:35.558 --rc genhtml_legend=1 00:04:35.558 --rc geninfo_all_blocks=1 00:04:35.558 --rc geninfo_unexecuted_blocks=1 00:04:35.558 00:04:35.558 ' 00:04:35.558 04:19:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.558 --rc genhtml_branch_coverage=1 00:04:35.558 --rc genhtml_function_coverage=1 00:04:35.558 --rc genhtml_legend=1 00:04:35.558 --rc geninfo_all_blocks=1 00:04:35.558 --rc geninfo_unexecuted_blocks=1 00:04:35.558 00:04:35.558 ' 00:04:35.558 04:19:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.558 --rc genhtml_branch_coverage=1 00:04:35.558 --rc genhtml_function_coverage=1 00:04:35.558 --rc genhtml_legend=1 00:04:35.558 --rc geninfo_all_blocks=1 00:04:35.558 --rc geninfo_unexecuted_blocks=1 00:04:35.558 00:04:35.558 ' 00:04:35.558 04:19:38 -- rpc/rpc.sh@65 -- # spdk_pid=53796 00:04:35.558 04:19:38 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.558 04:19:38 -- rpc/rpc.sh@67 -- # waitforlisten 53796 00:04:35.558 04:19:38 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:35.558 04:19:38 -- common/autotest_common.sh@829 -- # '[' -z 53796 ']' 00:04:35.558 04:19:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.558 04:19:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.558 04:19:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.558 04:19:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.558 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:35.558 [2024-12-07 04:19:38.598267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
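Note: the rpc_integrity run that follows drives its bdev RPCs through the rpc_cmd wrapper; issued directly with scripts/rpc.py, the same sequence (names, sizes and flags exactly as they appear in the log) is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512            # 8 MB malloc bdev, 512-byte blocks -> Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0
  $rpc bdev_get_bdevs | jq length          # expect 2 (Malloc0 + Passthru0)
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0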
00:04:35.558 [2024-12-07 04:19:38.598545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53796 ] 00:04:35.558 [2024-12-07 04:19:38.732614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.558 [2024-12-07 04:19:38.792942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.558 [2024-12-07 04:19:38.793433] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:35.558 [2024-12-07 04:19:38.793488] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53796' to capture a snapshot of events at runtime. 00:04:35.558 [2024-12-07 04:19:38.793607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53796 for offline analysis/debug. 00:04:35.558 [2024-12-07 04:19:38.793722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.494 04:19:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.494 04:19:39 -- common/autotest_common.sh@862 -- # return 0 00:04:36.494 04:19:39 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.494 04:19:39 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.494 04:19:39 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:36.494 04:19:39 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:36.494 04:19:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.494 04:19:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.494 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.494 ************************************ 00:04:36.494 START TEST rpc_integrity 00:04:36.494 ************************************ 00:04:36.494 04:19:39 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:36.494 04:19:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.494 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.494 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.494 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.494 04:19:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.494 04:19:39 -- rpc/rpc.sh@13 -- # jq length 00:04:36.494 04:19:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.494 04:19:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.494 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.494 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.494 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.494 04:19:39 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:36.494 04:19:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.494 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.494 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.494 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.494 04:19:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.494 { 00:04:36.494 "name": "Malloc0", 00:04:36.494 "aliases": [ 00:04:36.494 
"d117162e-1511-4cd2-8e0e-3ca5945d8d45" 00:04:36.494 ], 00:04:36.494 "product_name": "Malloc disk", 00:04:36.494 "block_size": 512, 00:04:36.494 "num_blocks": 16384, 00:04:36.494 "uuid": "d117162e-1511-4cd2-8e0e-3ca5945d8d45", 00:04:36.494 "assigned_rate_limits": { 00:04:36.494 "rw_ios_per_sec": 0, 00:04:36.494 "rw_mbytes_per_sec": 0, 00:04:36.494 "r_mbytes_per_sec": 0, 00:04:36.494 "w_mbytes_per_sec": 0 00:04:36.495 }, 00:04:36.495 "claimed": false, 00:04:36.495 "zoned": false, 00:04:36.495 "supported_io_types": { 00:04:36.495 "read": true, 00:04:36.495 "write": true, 00:04:36.495 "unmap": true, 00:04:36.495 "write_zeroes": true, 00:04:36.495 "flush": true, 00:04:36.495 "reset": true, 00:04:36.495 "compare": false, 00:04:36.495 "compare_and_write": false, 00:04:36.495 "abort": true, 00:04:36.495 "nvme_admin": false, 00:04:36.495 "nvme_io": false 00:04:36.495 }, 00:04:36.495 "memory_domains": [ 00:04:36.495 { 00:04:36.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.495 "dma_device_type": 2 00:04:36.495 } 00:04:36.495 ], 00:04:36.495 "driver_specific": {} 00:04:36.495 } 00:04:36.495 ]' 00:04:36.495 04:19:39 -- rpc/rpc.sh@17 -- # jq length 00:04:36.764 04:19:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.764 04:19:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:36.764 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.764 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.764 [2024-12-07 04:19:39.755551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:36.764 [2024-12-07 04:19:39.755593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.764 [2024-12-07 04:19:39.755608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb5b4c0 00:04:36.764 [2024-12-07 04:19:39.755616] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.764 [2024-12-07 04:19:39.757230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.764 [2024-12-07 04:19:39.757368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.764 Passthru0 00:04:36.764 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.764 04:19:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.764 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.764 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.764 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.764 04:19:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.764 { 00:04:36.764 "name": "Malloc0", 00:04:36.764 "aliases": [ 00:04:36.764 "d117162e-1511-4cd2-8e0e-3ca5945d8d45" 00:04:36.764 ], 00:04:36.764 "product_name": "Malloc disk", 00:04:36.764 "block_size": 512, 00:04:36.764 "num_blocks": 16384, 00:04:36.764 "uuid": "d117162e-1511-4cd2-8e0e-3ca5945d8d45", 00:04:36.764 "assigned_rate_limits": { 00:04:36.764 "rw_ios_per_sec": 0, 00:04:36.764 "rw_mbytes_per_sec": 0, 00:04:36.764 "r_mbytes_per_sec": 0, 00:04:36.764 "w_mbytes_per_sec": 0 00:04:36.764 }, 00:04:36.764 "claimed": true, 00:04:36.764 "claim_type": "exclusive_write", 00:04:36.764 "zoned": false, 00:04:36.764 "supported_io_types": { 00:04:36.764 "read": true, 00:04:36.764 "write": true, 00:04:36.764 "unmap": true, 00:04:36.764 "write_zeroes": true, 00:04:36.764 "flush": true, 00:04:36.764 "reset": true, 00:04:36.764 "compare": false, 00:04:36.764 "compare_and_write": false, 00:04:36.764 "abort": true, 00:04:36.764 
"nvme_admin": false, 00:04:36.764 "nvme_io": false 00:04:36.764 }, 00:04:36.764 "memory_domains": [ 00:04:36.764 { 00:04:36.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.764 "dma_device_type": 2 00:04:36.764 } 00:04:36.764 ], 00:04:36.764 "driver_specific": {} 00:04:36.764 }, 00:04:36.764 { 00:04:36.764 "name": "Passthru0", 00:04:36.764 "aliases": [ 00:04:36.764 "b9ded279-661f-5308-b4f2-2f28991d9960" 00:04:36.764 ], 00:04:36.764 "product_name": "passthru", 00:04:36.764 "block_size": 512, 00:04:36.764 "num_blocks": 16384, 00:04:36.764 "uuid": "b9ded279-661f-5308-b4f2-2f28991d9960", 00:04:36.764 "assigned_rate_limits": { 00:04:36.764 "rw_ios_per_sec": 0, 00:04:36.764 "rw_mbytes_per_sec": 0, 00:04:36.764 "r_mbytes_per_sec": 0, 00:04:36.764 "w_mbytes_per_sec": 0 00:04:36.764 }, 00:04:36.764 "claimed": false, 00:04:36.764 "zoned": false, 00:04:36.764 "supported_io_types": { 00:04:36.764 "read": true, 00:04:36.764 "write": true, 00:04:36.764 "unmap": true, 00:04:36.764 "write_zeroes": true, 00:04:36.764 "flush": true, 00:04:36.764 "reset": true, 00:04:36.764 "compare": false, 00:04:36.765 "compare_and_write": false, 00:04:36.765 "abort": true, 00:04:36.765 "nvme_admin": false, 00:04:36.765 "nvme_io": false 00:04:36.765 }, 00:04:36.765 "memory_domains": [ 00:04:36.765 { 00:04:36.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.765 "dma_device_type": 2 00:04:36.765 } 00:04:36.765 ], 00:04:36.765 "driver_specific": { 00:04:36.765 "passthru": { 00:04:36.765 "name": "Passthru0", 00:04:36.765 "base_bdev_name": "Malloc0" 00:04:36.765 } 00:04:36.765 } 00:04:36.765 } 00:04:36.765 ]' 00:04:36.765 04:19:39 -- rpc/rpc.sh@21 -- # jq length 00:04:36.765 04:19:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.765 04:19:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.765 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.765 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.765 04:19:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:36.765 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.765 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.765 04:19:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.765 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.765 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.765 04:19:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.765 04:19:39 -- rpc/rpc.sh@26 -- # jq length 00:04:36.765 ************************************ 00:04:36.765 END TEST rpc_integrity 00:04:36.765 ************************************ 00:04:36.765 04:19:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.765 00:04:36.765 real 0m0.321s 00:04:36.765 user 0m0.212s 00:04:36.765 sys 0m0.039s 00:04:36.765 04:19:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.765 04:19:39 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:36.765 04:19:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.765 04:19:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.765 ************************************ 00:04:36.765 START TEST rpc_plugins 00:04:36.765 
************************************ 00:04:36.765 04:19:39 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:36.765 04:19:39 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:36.765 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.765 04:19:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.765 04:19:39 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:36.765 04:19:39 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:36.765 04:19:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.765 04:19:39 -- common/autotest_common.sh@10 -- # set +x 00:04:37.024 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.024 04:19:40 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.024 { 00:04:37.024 "name": "Malloc1", 00:04:37.024 "aliases": [ 00:04:37.024 "17197969-8995-4ae6-85a3-51a07ba9e3f8" 00:04:37.024 ], 00:04:37.024 "product_name": "Malloc disk", 00:04:37.024 "block_size": 4096, 00:04:37.024 "num_blocks": 256, 00:04:37.024 "uuid": "17197969-8995-4ae6-85a3-51a07ba9e3f8", 00:04:37.024 "assigned_rate_limits": { 00:04:37.024 "rw_ios_per_sec": 0, 00:04:37.024 "rw_mbytes_per_sec": 0, 00:04:37.024 "r_mbytes_per_sec": 0, 00:04:37.024 "w_mbytes_per_sec": 0 00:04:37.024 }, 00:04:37.024 "claimed": false, 00:04:37.024 "zoned": false, 00:04:37.024 "supported_io_types": { 00:04:37.024 "read": true, 00:04:37.024 "write": true, 00:04:37.024 "unmap": true, 00:04:37.024 "write_zeroes": true, 00:04:37.024 "flush": true, 00:04:37.024 "reset": true, 00:04:37.024 "compare": false, 00:04:37.024 "compare_and_write": false, 00:04:37.024 "abort": true, 00:04:37.024 "nvme_admin": false, 00:04:37.024 "nvme_io": false 00:04:37.024 }, 00:04:37.024 "memory_domains": [ 00:04:37.024 { 00:04:37.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.024 "dma_device_type": 2 00:04:37.024 } 00:04:37.024 ], 00:04:37.024 "driver_specific": {} 00:04:37.024 } 00:04:37.024 ]' 00:04:37.024 04:19:40 -- rpc/rpc.sh@32 -- # jq length 00:04:37.024 04:19:40 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.024 04:19:40 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.024 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.024 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.024 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.024 04:19:40 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.024 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.024 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.024 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.024 04:19:40 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.024 04:19:40 -- rpc/rpc.sh@36 -- # jq length 00:04:37.024 ************************************ 00:04:37.024 END TEST rpc_plugins 00:04:37.024 ************************************ 00:04:37.024 04:19:40 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.024 00:04:37.024 real 0m0.160s 00:04:37.024 user 0m0.110s 00:04:37.024 sys 0m0.015s 00:04:37.024 04:19:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.024 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.024 04:19:40 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.024 04:19:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.024 04:19:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.024 04:19:40 -- common/autotest_common.sh@10 -- # set +x 
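The rpc_integrity and rpc_plugins runs above drive the target entirely through SPDK's RPC interface: an 8 MiB malloc bdev is created, claimed by a passthru bdev, listed with bdev_get_bdevs and counted with jq, then torn down again, while the plugin variant reaches the same malloc create/delete through methods loaded from a plugin module on PYTHONPATH. A rough hand-driven equivalent of that sequence (a sketch only; the $rpc/$sock shorthand is introduced here, the socket path is an assumption and must match whatever the target was actually started with, and the rpc_plugin.py module name is inferred from the --plugin argument rather than shown in this log):

# Minimal replay of the RPC sequence exercised by rpc_integrity / rpc_plugins above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock                      # assumed default; pass the target's actual -r socket

$rpc -s $sock bdev_malloc_create 8 512       # 8 MiB malloc bdev, 512 B blocks -> "Malloc0"
$rpc -s $sock bdev_passthru_create -b Malloc0 -p Passthru0
$rpc -s $sock bdev_get_bdevs | jq length     # 2 entries while Passthru0 holds its claim on Malloc0
$rpc -s $sock bdev_passthru_delete Passthru0
$rpc -s $sock bdev_malloc_delete Malloc0

# rpc_plugins path: --plugin imports the named module from PYTHONPATH (assumed to be the
# rpc_plugin.py under the test/rpc_plugins directory exported earlier), and that module
# registers the create_malloc / delete_malloc methods used below.
$rpc -s $sock --plugin rpc_plugin create_malloc
$rpc -s $sock --plugin rpc_plugin delete_malloc Malloc1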
00:04:37.024 ************************************ 00:04:37.024 START TEST rpc_trace_cmd_test 00:04:37.024 ************************************ 00:04:37.024 04:19:40 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:37.024 04:19:40 -- rpc/rpc.sh@40 -- # local info 00:04:37.024 04:19:40 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.024 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.024 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.024 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.024 04:19:40 -- rpc/rpc.sh@42 -- # info='{ 00:04:37.024 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53796", 00:04:37.024 "tpoint_group_mask": "0x8", 00:04:37.024 "iscsi_conn": { 00:04:37.024 "mask": "0x2", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "scsi": { 00:04:37.024 "mask": "0x4", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "bdev": { 00:04:37.024 "mask": "0x8", 00:04:37.024 "tpoint_mask": "0xffffffffffffffff" 00:04:37.024 }, 00:04:37.024 "nvmf_rdma": { 00:04:37.024 "mask": "0x10", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "nvmf_tcp": { 00:04:37.024 "mask": "0x20", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "ftl": { 00:04:37.024 "mask": "0x40", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "blobfs": { 00:04:37.024 "mask": "0x80", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "dsa": { 00:04:37.024 "mask": "0x200", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "thread": { 00:04:37.024 "mask": "0x400", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "nvme_pcie": { 00:04:37.024 "mask": "0x800", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "iaa": { 00:04:37.024 "mask": "0x1000", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "nvme_tcp": { 00:04:37.024 "mask": "0x2000", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 }, 00:04:37.024 "bdev_nvme": { 00:04:37.024 "mask": "0x4000", 00:04:37.024 "tpoint_mask": "0x0" 00:04:37.024 } 00:04:37.024 }' 00:04:37.024 04:19:40 -- rpc/rpc.sh@43 -- # jq length 00:04:37.024 04:19:40 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:37.024 04:19:40 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.283 04:19:40 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.283 04:19:40 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.283 04:19:40 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.283 04:19:40 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.283 04:19:40 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.283 04:19:40 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.283 ************************************ 00:04:37.283 END TEST rpc_trace_cmd_test 00:04:37.283 ************************************ 00:04:37.283 04:19:40 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.283 00:04:37.283 real 0m0.260s 00:04:37.283 user 0m0.223s 00:04:37.283 sys 0m0.028s 00:04:37.283 04:19:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.283 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 04:19:40 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.283 04:19:40 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.283 04:19:40 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.283 04:19:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.283 04:19:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.283 04:19:40 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.283 ************************************ 00:04:37.283 START TEST rpc_daemon_integrity 00:04:37.283 ************************************ 00:04:37.283 04:19:40 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:37.283 04:19:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.283 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.283 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.283 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.283 04:19:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.283 04:19:40 -- rpc/rpc.sh@13 -- # jq length 00:04:37.543 04:19:40 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.543 04:19:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.543 04:19:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.543 { 00:04:37.543 "name": "Malloc2", 00:04:37.543 "aliases": [ 00:04:37.543 "04f10bfc-d7c3-40db-bb8b-8f39846a2f1d" 00:04:37.543 ], 00:04:37.543 "product_name": "Malloc disk", 00:04:37.543 "block_size": 512, 00:04:37.543 "num_blocks": 16384, 00:04:37.543 "uuid": "04f10bfc-d7c3-40db-bb8b-8f39846a2f1d", 00:04:37.543 "assigned_rate_limits": { 00:04:37.543 "rw_ios_per_sec": 0, 00:04:37.543 "rw_mbytes_per_sec": 0, 00:04:37.543 "r_mbytes_per_sec": 0, 00:04:37.543 "w_mbytes_per_sec": 0 00:04:37.543 }, 00:04:37.543 "claimed": false, 00:04:37.543 "zoned": false, 00:04:37.543 "supported_io_types": { 00:04:37.543 "read": true, 00:04:37.543 "write": true, 00:04:37.543 "unmap": true, 00:04:37.543 "write_zeroes": true, 00:04:37.543 "flush": true, 00:04:37.543 "reset": true, 00:04:37.543 "compare": false, 00:04:37.543 "compare_and_write": false, 00:04:37.543 "abort": true, 00:04:37.543 "nvme_admin": false, 00:04:37.543 "nvme_io": false 00:04:37.543 }, 00:04:37.543 "memory_domains": [ 00:04:37.543 { 00:04:37.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.543 "dma_device_type": 2 00:04:37.543 } 00:04:37.543 ], 00:04:37.543 "driver_specific": {} 00:04:37.543 } 00:04:37.543 ]' 00:04:37.543 04:19:40 -- rpc/rpc.sh@17 -- # jq length 00:04:37.543 04:19:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.543 04:19:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 [2024-12-07 04:19:40.648233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:37.543 [2024-12-07 04:19:40.648275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.543 [2024-12-07 04:19:40.648290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb5bc40 00:04:37.543 [2024-12-07 04:19:40.648297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.543 [2024-12-07 04:19:40.649596] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.543 
[2024-12-07 04:19:40.649627] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.543 Passthru0 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.543 { 00:04:37.543 "name": "Malloc2", 00:04:37.543 "aliases": [ 00:04:37.543 "04f10bfc-d7c3-40db-bb8b-8f39846a2f1d" 00:04:37.543 ], 00:04:37.543 "product_name": "Malloc disk", 00:04:37.543 "block_size": 512, 00:04:37.543 "num_blocks": 16384, 00:04:37.543 "uuid": "04f10bfc-d7c3-40db-bb8b-8f39846a2f1d", 00:04:37.543 "assigned_rate_limits": { 00:04:37.543 "rw_ios_per_sec": 0, 00:04:37.543 "rw_mbytes_per_sec": 0, 00:04:37.543 "r_mbytes_per_sec": 0, 00:04:37.543 "w_mbytes_per_sec": 0 00:04:37.543 }, 00:04:37.543 "claimed": true, 00:04:37.543 "claim_type": "exclusive_write", 00:04:37.543 "zoned": false, 00:04:37.543 "supported_io_types": { 00:04:37.543 "read": true, 00:04:37.543 "write": true, 00:04:37.543 "unmap": true, 00:04:37.543 "write_zeroes": true, 00:04:37.543 "flush": true, 00:04:37.543 "reset": true, 00:04:37.543 "compare": false, 00:04:37.543 "compare_and_write": false, 00:04:37.543 "abort": true, 00:04:37.543 "nvme_admin": false, 00:04:37.543 "nvme_io": false 00:04:37.543 }, 00:04:37.543 "memory_domains": [ 00:04:37.543 { 00:04:37.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.543 "dma_device_type": 2 00:04:37.543 } 00:04:37.543 ], 00:04:37.543 "driver_specific": {} 00:04:37.543 }, 00:04:37.543 { 00:04:37.543 "name": "Passthru0", 00:04:37.543 "aliases": [ 00:04:37.543 "4e8b89ba-81e6-5fac-9ac9-068c0e213821" 00:04:37.543 ], 00:04:37.543 "product_name": "passthru", 00:04:37.543 "block_size": 512, 00:04:37.543 "num_blocks": 16384, 00:04:37.543 "uuid": "4e8b89ba-81e6-5fac-9ac9-068c0e213821", 00:04:37.543 "assigned_rate_limits": { 00:04:37.543 "rw_ios_per_sec": 0, 00:04:37.543 "rw_mbytes_per_sec": 0, 00:04:37.543 "r_mbytes_per_sec": 0, 00:04:37.543 "w_mbytes_per_sec": 0 00:04:37.543 }, 00:04:37.543 "claimed": false, 00:04:37.543 "zoned": false, 00:04:37.543 "supported_io_types": { 00:04:37.543 "read": true, 00:04:37.543 "write": true, 00:04:37.543 "unmap": true, 00:04:37.543 "write_zeroes": true, 00:04:37.543 "flush": true, 00:04:37.543 "reset": true, 00:04:37.543 "compare": false, 00:04:37.543 "compare_and_write": false, 00:04:37.543 "abort": true, 00:04:37.543 "nvme_admin": false, 00:04:37.543 "nvme_io": false 00:04:37.543 }, 00:04:37.543 "memory_domains": [ 00:04:37.543 { 00:04:37.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.543 "dma_device_type": 2 00:04:37.543 } 00:04:37.543 ], 00:04:37.543 "driver_specific": { 00:04:37.543 "passthru": { 00:04:37.543 "name": "Passthru0", 00:04:37.543 "base_bdev_name": "Malloc2" 00:04:37.543 } 00:04:37.543 } 00:04:37.543 } 00:04:37.543 ]' 00:04:37.543 04:19:40 -- rpc/rpc.sh@21 -- # jq length 00:04:37.543 04:19:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.543 04:19:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.543 04:19:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.543 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.543 04:19:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.543 04:19:40 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.543 04:19:40 -- rpc/rpc.sh@26 -- # jq length 00:04:37.803 ************************************ 00:04:37.803 END TEST rpc_daemon_integrity 00:04:37.803 ************************************ 00:04:37.803 04:19:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.803 00:04:37.803 real 0m0.315s 00:04:37.803 user 0m0.207s 00:04:37.803 sys 0m0.040s 00:04:37.803 04:19:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.803 04:19:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.803 04:19:40 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.803 04:19:40 -- rpc/rpc.sh@84 -- # killprocess 53796 00:04:37.803 04:19:40 -- common/autotest_common.sh@936 -- # '[' -z 53796 ']' 00:04:37.803 04:19:40 -- common/autotest_common.sh@940 -- # kill -0 53796 00:04:37.803 04:19:40 -- common/autotest_common.sh@941 -- # uname 00:04:37.803 04:19:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.803 04:19:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53796 00:04:37.803 killing process with pid 53796 00:04:37.803 04:19:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:37.803 04:19:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.803 04:19:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53796' 00:04:37.803 04:19:40 -- common/autotest_common.sh@955 -- # kill 53796 00:04:37.803 04:19:40 -- common/autotest_common.sh@960 -- # wait 53796 00:04:38.062 00:04:38.062 real 0m2.800s 00:04:38.062 user 0m3.744s 00:04:38.062 sys 0m0.555s 00:04:38.062 ************************************ 00:04:38.062 END TEST rpc 00:04:38.062 ************************************ 00:04:38.062 04:19:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.062 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 04:19:41 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:38.062 04:19:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.062 04:19:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.062 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.062 ************************************ 00:04:38.062 START TEST rpc_client 00:04:38.062 ************************************ 00:04:38.062 04:19:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:38.062 * Looking for test storage... 
00:04:38.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:38.062 04:19:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.062 04:19:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.062 04:19:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:38.321 04:19:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:38.321 04:19:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:38.321 04:19:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:38.321 04:19:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:38.321 04:19:41 -- scripts/common.sh@335 -- # IFS=.-: 00:04:38.321 04:19:41 -- scripts/common.sh@335 -- # read -ra ver1 00:04:38.321 04:19:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.321 04:19:41 -- scripts/common.sh@336 -- # read -ra ver2 00:04:38.321 04:19:41 -- scripts/common.sh@337 -- # local 'op=<' 00:04:38.321 04:19:41 -- scripts/common.sh@339 -- # ver1_l=2 00:04:38.321 04:19:41 -- scripts/common.sh@340 -- # ver2_l=1 00:04:38.321 04:19:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:38.321 04:19:41 -- scripts/common.sh@343 -- # case "$op" in 00:04:38.321 04:19:41 -- scripts/common.sh@344 -- # : 1 00:04:38.321 04:19:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:38.321 04:19:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.321 04:19:41 -- scripts/common.sh@364 -- # decimal 1 00:04:38.321 04:19:41 -- scripts/common.sh@352 -- # local d=1 00:04:38.321 04:19:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.321 04:19:41 -- scripts/common.sh@354 -- # echo 1 00:04:38.321 04:19:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:38.321 04:19:41 -- scripts/common.sh@365 -- # decimal 2 00:04:38.321 04:19:41 -- scripts/common.sh@352 -- # local d=2 00:04:38.321 04:19:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.321 04:19:41 -- scripts/common.sh@354 -- # echo 2 00:04:38.321 04:19:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:38.321 04:19:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:38.321 04:19:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:38.321 04:19:41 -- scripts/common.sh@367 -- # return 0 00:04:38.321 04:19:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.321 04:19:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.321 --rc genhtml_branch_coverage=1 00:04:38.321 --rc genhtml_function_coverage=1 00:04:38.321 --rc genhtml_legend=1 00:04:38.321 --rc geninfo_all_blocks=1 00:04:38.321 --rc geninfo_unexecuted_blocks=1 00:04:38.321 00:04:38.321 ' 00:04:38.322 04:19:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.322 --rc genhtml_branch_coverage=1 00:04:38.322 --rc genhtml_function_coverage=1 00:04:38.322 --rc genhtml_legend=1 00:04:38.322 --rc geninfo_all_blocks=1 00:04:38.322 --rc geninfo_unexecuted_blocks=1 00:04:38.322 00:04:38.322 ' 00:04:38.322 04:19:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.322 --rc genhtml_branch_coverage=1 00:04:38.322 --rc genhtml_function_coverage=1 00:04:38.322 --rc genhtml_legend=1 00:04:38.322 --rc geninfo_all_blocks=1 00:04:38.322 --rc geninfo_unexecuted_blocks=1 00:04:38.322 00:04:38.322 ' 00:04:38.322 
04:19:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:38.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.322 --rc genhtml_branch_coverage=1 00:04:38.322 --rc genhtml_function_coverage=1 00:04:38.322 --rc genhtml_legend=1 00:04:38.322 --rc geninfo_all_blocks=1 00:04:38.322 --rc geninfo_unexecuted_blocks=1 00:04:38.322 00:04:38.322 ' 00:04:38.322 04:19:41 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:38.322 OK 00:04:38.322 04:19:41 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:38.322 00:04:38.322 real 0m0.197s 00:04:38.322 user 0m0.122s 00:04:38.322 sys 0m0.085s 00:04:38.322 04:19:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.322 ************************************ 00:04:38.322 END TEST rpc_client 00:04:38.322 ************************************ 00:04:38.322 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.322 04:19:41 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:38.322 04:19:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.322 04:19:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.322 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.322 ************************************ 00:04:38.322 START TEST json_config 00:04:38.322 ************************************ 00:04:38.322 04:19:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:38.322 04:19:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.322 04:19:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:38.322 04:19:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.581 04:19:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:38.581 04:19:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:38.582 04:19:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:38.582 04:19:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:38.582 04:19:41 -- scripts/common.sh@335 -- # IFS=.-: 00:04:38.582 04:19:41 -- scripts/common.sh@335 -- # read -ra ver1 00:04:38.582 04:19:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.582 04:19:41 -- scripts/common.sh@336 -- # read -ra ver2 00:04:38.582 04:19:41 -- scripts/common.sh@337 -- # local 'op=<' 00:04:38.582 04:19:41 -- scripts/common.sh@339 -- # ver1_l=2 00:04:38.582 04:19:41 -- scripts/common.sh@340 -- # ver2_l=1 00:04:38.582 04:19:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:38.582 04:19:41 -- scripts/common.sh@343 -- # case "$op" in 00:04:38.582 04:19:41 -- scripts/common.sh@344 -- # : 1 00:04:38.582 04:19:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:38.582 04:19:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.582 04:19:41 -- scripts/common.sh@364 -- # decimal 1 00:04:38.582 04:19:41 -- scripts/common.sh@352 -- # local d=1 00:04:38.582 04:19:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.582 04:19:41 -- scripts/common.sh@354 -- # echo 1 00:04:38.582 04:19:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:38.582 04:19:41 -- scripts/common.sh@365 -- # decimal 2 00:04:38.582 04:19:41 -- scripts/common.sh@352 -- # local d=2 00:04:38.582 04:19:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.582 04:19:41 -- scripts/common.sh@354 -- # echo 2 00:04:38.582 04:19:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:38.582 04:19:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:38.582 04:19:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:38.582 04:19:41 -- scripts/common.sh@367 -- # return 0 00:04:38.582 04:19:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.582 04:19:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.582 --rc genhtml_branch_coverage=1 00:04:38.582 --rc genhtml_function_coverage=1 00:04:38.582 --rc genhtml_legend=1 00:04:38.582 --rc geninfo_all_blocks=1 00:04:38.582 --rc geninfo_unexecuted_blocks=1 00:04:38.582 00:04:38.582 ' 00:04:38.582 04:19:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.582 --rc genhtml_branch_coverage=1 00:04:38.582 --rc genhtml_function_coverage=1 00:04:38.582 --rc genhtml_legend=1 00:04:38.582 --rc geninfo_all_blocks=1 00:04:38.582 --rc geninfo_unexecuted_blocks=1 00:04:38.582 00:04:38.582 ' 00:04:38.582 04:19:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.582 --rc genhtml_branch_coverage=1 00:04:38.582 --rc genhtml_function_coverage=1 00:04:38.582 --rc genhtml_legend=1 00:04:38.582 --rc geninfo_all_blocks=1 00:04:38.582 --rc geninfo_unexecuted_blocks=1 00:04:38.582 00:04:38.582 ' 00:04:38.582 04:19:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.582 --rc genhtml_branch_coverage=1 00:04:38.582 --rc genhtml_function_coverage=1 00:04:38.582 --rc genhtml_legend=1 00:04:38.582 --rc geninfo_all_blocks=1 00:04:38.582 --rc geninfo_unexecuted_blocks=1 00:04:38.582 00:04:38.582 ' 00:04:38.582 04:19:41 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:38.582 04:19:41 -- nvmf/common.sh@7 -- # uname -s 00:04:38.582 04:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.582 04:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.582 04:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.582 04:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.582 04:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.582 04:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.582 04:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.582 04:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.582 04:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.582 04:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.582 04:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 
00:04:38.582 04:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:04:38.582 04:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.582 04:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.582 04:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.582 04:19:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:38.582 04:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.582 04:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.582 04:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.582 04:19:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.582 04:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.582 04:19:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.582 04:19:41 -- paths/export.sh@5 -- # export PATH 00:04:38.582 04:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.582 04:19:41 -- nvmf/common.sh@46 -- # : 0 00:04:38.582 04:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:38.582 04:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:38.582 04:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:38.582 04:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.582 04:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.582 04:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:38.582 04:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:38.582 04:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:38.582 04:19:41 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.582 04:19:41 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:38.582 04:19:41 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:38.582 04:19:41 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:38.582 04:19:41 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:38.582 04:19:41 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:38.582 04:19:41 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:38.582 04:19:41 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:38.582 04:19:41 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:38.582 04:19:41 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:38.582 04:19:41 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.582 04:19:41 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:38.582 INFO: JSON configuration test init 00:04:38.582 04:19:41 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:38.582 04:19:41 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:38.582 04:19:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.582 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.582 04:19:41 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:38.582 04:19:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.582 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.582 Waiting for target to run... 00:04:38.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.582 04:19:41 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:38.582 04:19:41 -- json_config/json_config.sh@98 -- # local app=target 00:04:38.582 04:19:41 -- json_config/json_config.sh@99 -- # shift 00:04:38.582 04:19:41 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:38.582 04:19:41 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:38.582 04:19:41 -- json_config/json_config.sh@111 -- # app_pid[$app]=54049 00:04:38.582 04:19:41 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:38.582 04:19:41 -- json_config/json_config.sh@114 -- # waitforlisten 54049 /var/tmp/spdk_tgt.sock 00:04:38.582 04:19:41 -- common/autotest_common.sh@829 -- # '[' -z 54049 ']' 00:04:38.582 04:19:41 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:38.582 04:19:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.582 04:19:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.583 04:19:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
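For the json_config pass the target is launched in a held-off state so configuration can be applied before the framework initializes, which is what the --wait-for-rpc flag and the waitforlisten loop around it are doing. A minimal sketch of that launch pattern follows; the rpc_get_methods readiness poll and the framework_start_init call are the conventional way to wait and to release the target, and are assumptions here rather than lines taken from this log:

# Sketch: start spdk_tgt held at --wait-for-rpc, configure it, then release it.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

$spdk_tgt -m 0x1 -s 1024 -r $sock --wait-for-rpc &    # 1 core, 1024 MB memory, custom RPC socket
until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # assumed readiness poll
# pre-init configuration (accel, sock, loaded JSON subsystems, ...) is issued at this point
$rpc -s $sock framework_start_init                    # leave the wait-for-rpc state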
00:04:38.583 04:19:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.583 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.583 [2024-12-07 04:19:41.713699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:38.583 [2024-12-07 04:19:41.713989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54049 ] 00:04:38.847 [2024-12-07 04:19:42.035188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.105 [2024-12-07 04:19:42.089321] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.105 [2024-12-07 04:19:42.089767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.673 04:19:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.673 04:19:42 -- common/autotest_common.sh@862 -- # return 0 00:04:39.673 04:19:42 -- json_config/json_config.sh@115 -- # echo '' 00:04:39.673 00:04:39.673 04:19:42 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:39.673 04:19:42 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:39.673 04:19:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.673 04:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.673 04:19:42 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:39.673 04:19:42 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:39.673 04:19:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.673 04:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.673 04:19:42 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:39.673 04:19:42 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:39.673 04:19:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:39.932 04:19:43 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:39.932 04:19:43 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:39.932 04:19:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.932 04:19:43 -- common/autotest_common.sh@10 -- # set +x 00:04:39.932 04:19:43 -- json_config/json_config.sh@48 -- # local ret=0 00:04:39.932 04:19:43 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:39.932 04:19:43 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:39.932 04:19:43 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:39.932 04:19:43 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:39.932 04:19:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:40.190 04:19:43 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:40.190 04:19:43 -- json_config/json_config.sh@51 -- # local get_types 00:04:40.190 04:19:43 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:40.190 04:19:43 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:40.190 04:19:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.190 04:19:43 -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.449 04:19:43 -- json_config/json_config.sh@58 -- # return 0 00:04:40.449 04:19:43 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:40.449 04:19:43 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:40.449 04:19:43 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:40.449 04:19:43 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:40.449 04:19:43 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:40.449 04:19:43 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:40.449 04:19:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.449 04:19:43 -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 04:19:43 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:40.449 04:19:43 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:40.449 04:19:43 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:40.449 04:19:43 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:40.449 04:19:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:40.707 MallocForNvmf0 00:04:40.707 04:19:43 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:40.707 04:19:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:40.707 MallocForNvmf1 00:04:40.966 04:19:43 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:40.966 04:19:43 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:40.966 [2024-12-07 04:19:44.204042] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.225 04:19:44 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:41.225 04:19:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:41.484 04:19:44 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:41.484 04:19:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:41.484 04:19:44 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:41.484 04:19:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:41.743 04:19:44 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:41.743 04:19:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:42.003 [2024-12-07 04:19:45.120352] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.003 
04:19:45 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:42.003 04:19:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.003 04:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.003 04:19:45 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:42.003 04:19:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.003 04:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.003 04:19:45 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:42.003 04:19:45 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:42.003 04:19:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:42.262 MallocBdevForConfigChangeCheck 00:04:42.262 04:19:45 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:42.262 04:19:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.262 04:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:42.262 04:19:45 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:42.262 04:19:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.828 INFO: shutting down applications... 00:04:42.828 04:19:45 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:42.828 04:19:45 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:42.828 04:19:45 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:42.828 04:19:45 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:42.828 04:19:45 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:43.085 Calling clear_iscsi_subsystem 00:04:43.085 Calling clear_nvmf_subsystem 00:04:43.085 Calling clear_nbd_subsystem 00:04:43.085 Calling clear_ublk_subsystem 00:04:43.085 Calling clear_vhost_blk_subsystem 00:04:43.085 Calling clear_vhost_scsi_subsystem 00:04:43.085 Calling clear_scheduler_subsystem 00:04:43.085 Calling clear_bdev_subsystem 00:04:43.085 Calling clear_accel_subsystem 00:04:43.085 Calling clear_vmd_subsystem 00:04:43.085 Calling clear_sock_subsystem 00:04:43.085 Calling clear_iobuf_subsystem 00:04:43.085 04:19:46 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:43.085 04:19:46 -- json_config/json_config.sh@396 -- # count=100 00:04:43.085 04:19:46 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:43.085 04:19:46 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.085 04:19:46 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:43.085 04:19:46 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:43.364 04:19:46 -- json_config/json_config.sh@398 -- # break 00:04:43.364 04:19:46 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:43.364 04:19:46 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:43.364 04:19:46 -- json_config/json_config.sh@120 -- # local app=target 00:04:43.364 04:19:46 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:43.364 04:19:46 -- json_config/json_config.sh@124 -- # [[ -n 54049 ]] 00:04:43.364 04:19:46 -- json_config/json_config.sh@127 -- # kill -SIGINT 54049 00:04:43.364 04:19:46 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:43.364 04:19:46 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:43.364 04:19:46 -- json_config/json_config.sh@130 -- # kill -0 54049 00:04:43.364 04:19:46 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:43.933 04:19:47 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:43.933 04:19:47 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:43.933 SPDK target shutdown done 00:04:43.933 INFO: relaunching applications... 00:04:43.933 04:19:47 -- json_config/json_config.sh@130 -- # kill -0 54049 00:04:43.933 04:19:47 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:43.933 04:19:47 -- json_config/json_config.sh@132 -- # break 00:04:43.933 04:19:47 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:43.933 04:19:47 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:43.933 04:19:47 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:43.933 04:19:47 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.933 04:19:47 -- json_config/json_config.sh@98 -- # local app=target 00:04:43.933 04:19:47 -- json_config/json_config.sh@99 -- # shift 00:04:43.933 04:19:47 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:43.933 04:19:47 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:43.933 04:19:47 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:43.933 04:19:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:43.933 04:19:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:43.933 04:19:47 -- json_config/json_config.sh@111 -- # app_pid[$app]=54234 00:04:43.933 04:19:47 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.933 04:19:47 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:43.933 Waiting for target to run... 00:04:43.933 04:19:47 -- json_config/json_config.sh@114 -- # waitforlisten 54234 /var/tmp/spdk_tgt.sock 00:04:43.933 04:19:47 -- common/autotest_common.sh@829 -- # '[' -z 54234 ']' 00:04:43.933 04:19:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.933 04:19:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.933 04:19:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.933 04:19:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.933 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.933 [2024-12-07 04:19:47.125832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:43.933 [2024-12-07 04:19:47.125933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54234 ] 00:04:44.502 [2024-12-07 04:19:47.432004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.502 [2024-12-07 04:19:47.475648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.502 [2024-12-07 04:19:47.475885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.761 [2024-12-07 04:19:47.775267] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.761 [2024-12-07 04:19:47.807328] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:45.020 00:04:45.020 INFO: Checking if target configuration is the same... 00:04:45.020 04:19:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.020 04:19:48 -- common/autotest_common.sh@862 -- # return 0 00:04:45.020 04:19:48 -- json_config/json_config.sh@115 -- # echo '' 00:04:45.020 04:19:48 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:45.020 04:19:48 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:45.020 04:19:48 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:45.020 04:19:48 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.020 04:19:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.020 + '[' 2 -ne 2 ']' 00:04:45.020 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:45.020 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:45.020 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:45.020 +++ basename /dev/fd/62 00:04:45.020 ++ mktemp /tmp/62.XXX 00:04:45.020 + tmp_file_1=/tmp/62.VHq 00:04:45.020 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.020 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:45.020 + tmp_file_2=/tmp/spdk_tgt_config.json.ol4 00:04:45.020 + ret=0 00:04:45.020 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:45.293 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:45.564 + diff -u /tmp/62.VHq /tmp/spdk_tgt_config.json.ol4 00:04:45.564 INFO: JSON config files are the same 00:04:45.564 + echo 'INFO: JSON config files are the same' 00:04:45.564 + rm /tmp/62.VHq /tmp/spdk_tgt_config.json.ol4 00:04:45.564 + exit 0 00:04:45.564 INFO: changing configuration and checking if this can be detected... 00:04:45.564 04:19:48 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:45.564 04:19:48 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:45.564 04:19:48 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:45.564 04:19:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:45.564 04:19:48 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:45.564 04:19:48 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.564 04:19:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.564 + '[' 2 -ne 2 ']' 00:04:45.564 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:45.564 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:45.823 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:45.823 +++ basename /dev/fd/62 00:04:45.823 ++ mktemp /tmp/62.XXX 00:04:45.823 + tmp_file_1=/tmp/62.Emv 00:04:45.823 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.823 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:45.823 + tmp_file_2=/tmp/spdk_tgt_config.json.glX 00:04:45.823 + ret=0 00:04:45.823 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:46.082 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:46.082 + diff -u /tmp/62.Emv /tmp/spdk_tgt_config.json.glX 00:04:46.082 + ret=1 00:04:46.082 + echo '=== Start of file: /tmp/62.Emv ===' 00:04:46.082 + cat /tmp/62.Emv 00:04:46.082 + echo '=== End of file: /tmp/62.Emv ===' 00:04:46.082 + echo '' 00:04:46.082 + echo '=== Start of file: /tmp/spdk_tgt_config.json.glX ===' 00:04:46.082 + cat /tmp/spdk_tgt_config.json.glX 00:04:46.082 + echo '=== End of file: /tmp/spdk_tgt_config.json.glX ===' 00:04:46.082 + echo '' 00:04:46.082 + rm /tmp/62.Emv /tmp/spdk_tgt_config.json.glX 00:04:46.082 + exit 1 00:04:46.082 INFO: configuration change detected. 00:04:46.082 04:19:49 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
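The "JSON config files are the same" and "configuration change detected" verdicts above come from diffing two normalized dumps of the configuration: json_diff.sh runs both the live save_config output and the on-disk spdk_tgt_config.json through config_filter.py -method sort, so an unchanged target diffs clean (ret=0), and deleting the MallocBdevForConfigChangeCheck sentinel makes the very same diff fail (ret=1). Done by hand it amounts to roughly the following; the /tmp file names are illustrative, not the mktemp names the test generated:

# Sketch of the normalize-and-diff check behind the messages above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
sock=/var/tmp/spdk_tgt.sock
cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

$rpc -s $sock save_config | $filter -method sort > /tmp/running.json
$filter -method sort < "$cfg"                    > /tmp/ondisk.json
diff -u /tmp/ondisk.json /tmp/running.json && echo "configs match"

$rpc -s $sock bdev_malloc_delete MallocBdevForConfigChangeCheck          # perturb the live config
$rpc -s $sock save_config | $filter -method sort | diff -u /tmp/ondisk.json -   # now exits non-zero

Sorting before diffing is what makes the comparison order-insensitive, so only real content differences, such as the deleted sentinel bdev, trip the failure path.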
00:04:46.082 04:19:49 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:46.082 04:19:49 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:46.082 04:19:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.082 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.082 04:19:49 -- json_config/json_config.sh@360 -- # local ret=0 00:04:46.082 04:19:49 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:46.082 04:19:49 -- json_config/json_config.sh@370 -- # [[ -n 54234 ]] 00:04:46.082 04:19:49 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:46.082 04:19:49 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:46.082 04:19:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.082 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.082 04:19:49 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:46.082 04:19:49 -- json_config/json_config.sh@246 -- # uname -s 00:04:46.082 04:19:49 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:46.082 04:19:49 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:46.082 04:19:49 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:46.082 04:19:49 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:46.082 04:19:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.082 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.082 04:19:49 -- json_config/json_config.sh@376 -- # killprocess 54234 00:04:46.082 04:19:49 -- common/autotest_common.sh@936 -- # '[' -z 54234 ']' 00:04:46.082 04:19:49 -- common/autotest_common.sh@940 -- # kill -0 54234 00:04:46.082 04:19:49 -- common/autotest_common.sh@941 -- # uname 00:04:46.082 04:19:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.082 04:19:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54234 00:04:46.341 killing process with pid 54234 00:04:46.341 04:19:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.341 04:19:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.341 04:19:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54234' 00:04:46.341 04:19:49 -- common/autotest_common.sh@955 -- # kill 54234 00:04:46.341 04:19:49 -- common/autotest_common.sh@960 -- # wait 54234 00:04:46.341 04:19:49 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.341 04:19:49 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:46.341 04:19:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.341 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.600 INFO: Success 00:04:46.600 04:19:49 -- json_config/json_config.sh@381 -- # return 0 00:04:46.600 04:19:49 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:46.600 00:04:46.600 real 0m8.131s 00:04:46.600 user 0m11.794s 00:04:46.600 sys 0m1.361s 00:04:46.600 ************************************ 00:04:46.600 END TEST json_config 00:04:46.600 ************************************ 00:04:46.600 04:19:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.600 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.600 04:19:49 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.600 
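Stripped of the harness, the configuration that the json_config test built, saved, and reloaded is a small NVMe-oF/TCP setup. The RPC sequence below restates it with the same bdev names, sizes, subsystem NQN, and port as the log; the save_config redirect and the commented relaunch line are illustrative rather than the exact file paths the test used:

# The NVMe-oF/TCP configuration exercised by the json_config test, replayed by hand.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

$rpc -s $sock bdev_malloc_create 8 512  --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

$rpc -s $sock save_config > /tmp/spdk_tgt_config.json             # persist the running config
# later: spdk_tgt -m 0x1 -s 1024 -r $sock --json /tmp/spdk_tgt_config.json   # relaunch from it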
04:19:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.600 04:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.600 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.600 ************************************ 00:04:46.600 START TEST json_config_extra_key 00:04:46.600 ************************************ 00:04:46.601 04:19:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.601 04:19:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:46.601 04:19:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:46.601 04:19:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:46.601 04:19:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:46.601 04:19:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:46.601 04:19:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:46.601 04:19:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:46.601 04:19:49 -- scripts/common.sh@335 -- # IFS=.-: 00:04:46.601 04:19:49 -- scripts/common.sh@335 -- # read -ra ver1 00:04:46.601 04:19:49 -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.601 04:19:49 -- scripts/common.sh@336 -- # read -ra ver2 00:04:46.601 04:19:49 -- scripts/common.sh@337 -- # local 'op=<' 00:04:46.601 04:19:49 -- scripts/common.sh@339 -- # ver1_l=2 00:04:46.601 04:19:49 -- scripts/common.sh@340 -- # ver2_l=1 00:04:46.601 04:19:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:46.601 04:19:49 -- scripts/common.sh@343 -- # case "$op" in 00:04:46.601 04:19:49 -- scripts/common.sh@344 -- # : 1 00:04:46.601 04:19:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:46.601 04:19:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.601 04:19:49 -- scripts/common.sh@364 -- # decimal 1 00:04:46.601 04:19:49 -- scripts/common.sh@352 -- # local d=1 00:04:46.601 04:19:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.601 04:19:49 -- scripts/common.sh@354 -- # echo 1 00:04:46.601 04:19:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:46.601 04:19:49 -- scripts/common.sh@365 -- # decimal 2 00:04:46.601 04:19:49 -- scripts/common.sh@352 -- # local d=2 00:04:46.601 04:19:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.601 04:19:49 -- scripts/common.sh@354 -- # echo 2 00:04:46.601 04:19:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:46.601 04:19:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:46.601 04:19:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:46.601 04:19:49 -- scripts/common.sh@367 -- # return 0 00:04:46.601 04:19:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.601 04:19:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:46.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.601 --rc genhtml_branch_coverage=1 00:04:46.601 --rc genhtml_function_coverage=1 00:04:46.601 --rc genhtml_legend=1 00:04:46.601 --rc geninfo_all_blocks=1 00:04:46.601 --rc geninfo_unexecuted_blocks=1 00:04:46.601 00:04:46.601 ' 00:04:46.601 04:19:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:46.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.601 --rc genhtml_branch_coverage=1 00:04:46.601 --rc genhtml_function_coverage=1 00:04:46.601 --rc genhtml_legend=1 00:04:46.601 --rc geninfo_all_blocks=1 00:04:46.601 --rc geninfo_unexecuted_blocks=1 00:04:46.601 00:04:46.601 ' 
00:04:46.601 04:19:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:46.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.601 --rc genhtml_branch_coverage=1 00:04:46.601 --rc genhtml_function_coverage=1 00:04:46.601 --rc genhtml_legend=1 00:04:46.601 --rc geninfo_all_blocks=1 00:04:46.601 --rc geninfo_unexecuted_blocks=1 00:04:46.601 00:04:46.601 ' 00:04:46.601 04:19:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:46.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.601 --rc genhtml_branch_coverage=1 00:04:46.601 --rc genhtml_function_coverage=1 00:04:46.601 --rc genhtml_legend=1 00:04:46.601 --rc geninfo_all_blocks=1 00:04:46.601 --rc geninfo_unexecuted_blocks=1 00:04:46.601 00:04:46.601 ' 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.601 04:19:49 -- nvmf/common.sh@7 -- # uname -s 00:04:46.601 04:19:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.601 04:19:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.601 04:19:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.601 04:19:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.601 04:19:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.601 04:19:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.601 04:19:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.601 04:19:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.601 04:19:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.601 04:19:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.601 04:19:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:04:46.601 04:19:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:04:46.601 04:19:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.601 04:19:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.601 04:19:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.601 04:19:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.601 04:19:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.601 04:19:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.601 04:19:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.601 04:19:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.601 04:19:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.601 04:19:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.601 04:19:49 -- paths/export.sh@5 -- # export PATH 00:04:46.601 04:19:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.601 04:19:49 -- nvmf/common.sh@46 -- # : 0 00:04:46.601 04:19:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:46.601 04:19:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:46.601 04:19:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:46.601 04:19:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.601 04:19:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.601 04:19:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:46.601 04:19:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:46.601 04:19:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:46.601 INFO: launching applications... 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:46.601 Waiting for target to run... 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54387 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54387 /var/tmp/spdk_tgt.sock 00:04:46.601 04:19:49 -- common/autotest_common.sh@829 -- # '[' -z 54387 ']' 00:04:46.601 04:19:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.601 04:19:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.601 04:19:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.601 04:19:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.601 04:19:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.601 04:19:49 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.860 [2024-12-07 04:19:49.876240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.860 [2024-12-07 04:19:49.877262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54387 ] 00:04:47.119 [2024-12-07 04:19:50.182485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.119 [2024-12-07 04:19:50.220103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.119 [2024-12-07 04:19:50.220496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.687 04:19:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.687 04:19:50 -- common/autotest_common.sh@862 -- # return 0 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:47.687 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:47.687 INFO: shutting down applications... 
00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54387 ]] 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54387 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54387 00:04:47.687 04:19:50 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54387 00:04:48.255 SPDK target shutdown done 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:48.255 Success 00:04:48.255 04:19:51 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:48.255 ************************************ 00:04:48.255 END TEST json_config_extra_key 00:04:48.255 ************************************ 00:04:48.255 00:04:48.255 real 0m1.747s 00:04:48.255 user 0m1.626s 00:04:48.255 sys 0m0.322s 00:04:48.255 04:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.255 04:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.255 04:19:51 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.255 04:19:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.255 04:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.255 04:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.255 ************************************ 00:04:48.255 START TEST alias_rpc 00:04:48.255 ************************************ 00:04:48.255 04:19:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.513 * Looking for test storage... 
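The json_config_extra_key run that just ended reduces to starting spdk_tgt against a pre-built JSON config, waiting for its RPC socket, and then tearing it down with SIGINT. A condensed sketch of the commands the script issued in this run (pid 54387 here; the kill -0 loop polls up to 30 times with sleep 0.5 until the target exits):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    # waitforlisten blocks until /var/tmp/spdk_tgt.sock accepts connections
    kill -SIGINT 54387
    # then: kill -0 54387 polled up to 30 times, sleep 0.5 between attempts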
00:04:48.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.513 04:19:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:48.513 04:19:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.513 04:19:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:48.513 04:19:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.513 04:19:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.513 04:19:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.513 04:19:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.513 04:19:51 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.513 04:19:51 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.513 04:19:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.513 04:19:51 -- scripts/common.sh@336 -- # read -ra ver2 00:04:48.513 04:19:51 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.513 04:19:51 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.513 04:19:51 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.513 04:19:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.513 04:19:51 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.513 04:19:51 -- scripts/common.sh@344 -- # : 1 00:04:48.513 04:19:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.513 04:19:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.513 04:19:51 -- scripts/common.sh@364 -- # decimal 1 00:04:48.513 04:19:51 -- scripts/common.sh@352 -- # local d=1 00:04:48.513 04:19:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.513 04:19:51 -- scripts/common.sh@354 -- # echo 1 00:04:48.513 04:19:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:48.513 04:19:51 -- scripts/common.sh@365 -- # decimal 2 00:04:48.513 04:19:51 -- scripts/common.sh@352 -- # local d=2 00:04:48.513 04:19:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.513 04:19:51 -- scripts/common.sh@354 -- # echo 2 00:04:48.513 04:19:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:48.513 04:19:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:48.513 04:19:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:48.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.513 04:19:51 -- scripts/common.sh@367 -- # return 0 00:04:48.513 04:19:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.513 04:19:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.513 --rc genhtml_branch_coverage=1 00:04:48.513 --rc genhtml_function_coverage=1 00:04:48.513 --rc genhtml_legend=1 00:04:48.513 --rc geninfo_all_blocks=1 00:04:48.513 --rc geninfo_unexecuted_blocks=1 00:04:48.513 00:04:48.513 ' 00:04:48.513 04:19:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.513 --rc genhtml_branch_coverage=1 00:04:48.513 --rc genhtml_function_coverage=1 00:04:48.513 --rc genhtml_legend=1 00:04:48.513 --rc geninfo_all_blocks=1 00:04:48.513 --rc geninfo_unexecuted_blocks=1 00:04:48.513 00:04:48.513 ' 00:04:48.513 04:19:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.513 --rc genhtml_branch_coverage=1 00:04:48.513 --rc genhtml_function_coverage=1 00:04:48.513 --rc genhtml_legend=1 00:04:48.513 --rc geninfo_all_blocks=1 00:04:48.513 --rc geninfo_unexecuted_blocks=1 00:04:48.513 00:04:48.513 ' 00:04:48.513 04:19:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:48.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.513 --rc genhtml_branch_coverage=1 00:04:48.513 --rc genhtml_function_coverage=1 00:04:48.513 --rc genhtml_legend=1 00:04:48.513 --rc geninfo_all_blocks=1 00:04:48.513 --rc geninfo_unexecuted_blocks=1 00:04:48.513 00:04:48.513 ' 00:04:48.513 04:19:51 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.513 04:19:51 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54458 00:04:48.513 04:19:51 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54458 00:04:48.513 04:19:51 -- common/autotest_common.sh@829 -- # '[' -z 54458 ']' 00:04:48.513 04:19:51 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.513 04:19:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.513 04:19:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.513 04:19:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.513 04:19:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.513 04:19:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.513 [2024-12-07 04:19:51.684822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:48.513 [2024-12-07 04:19:51.685085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54458 ] 00:04:48.771 [2024-12-07 04:19:51.819109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.771 [2024-12-07 04:19:51.872750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.771 [2024-12-07 04:19:51.873155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.705 04:19:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.705 04:19:52 -- common/autotest_common.sh@862 -- # return 0 00:04:49.705 04:19:52 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.705 04:19:52 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54458 00:04:49.705 04:19:52 -- common/autotest_common.sh@936 -- # '[' -z 54458 ']' 00:04:49.705 04:19:52 -- common/autotest_common.sh@940 -- # kill -0 54458 00:04:49.705 04:19:52 -- common/autotest_common.sh@941 -- # uname 00:04:49.705 04:19:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.705 04:19:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54458 00:04:49.965 killing process with pid 54458 00:04:49.965 04:19:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.965 04:19:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.965 04:19:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54458' 00:04:49.965 04:19:52 -- common/autotest_common.sh@955 -- # kill 54458 00:04:49.965 04:19:52 -- common/autotest_common.sh@960 -- # wait 54458 00:04:50.224 ************************************ 00:04:50.224 END TEST alias_rpc 00:04:50.224 ************************************ 00:04:50.224 00:04:50.224 real 0m1.784s 00:04:50.224 user 0m2.120s 00:04:50.224 sys 0m0.331s 00:04:50.224 04:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.224 04:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.224 04:19:53 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:50.224 04:19:53 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:50.224 04:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.224 04:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.224 04:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.224 ************************************ 00:04:50.224 START TEST spdkcli_tcp 00:04:50.224 ************************************ 00:04:50.224 04:19:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:50.224 * Looking for test storage... 
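The alias_rpc test that just finished follows the same pattern on the default socket: start a bare spdk_tgt (pid 54458 here, listening on /var/tmp/spdk.sock), replay a configuration through the RPC client, then kill the target by PID. The essential commands from this log; the semantics of load_config -i beyond what is visible here are not shown in this output:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
    kill 54458 && wait 54458   # killprocess: check the pid with kill -0 and ps, then kill and wait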
00:04:50.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:50.224 04:19:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:50.224 04:19:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:50.224 04:19:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:50.224 04:19:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:50.224 04:19:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:50.224 04:19:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:50.224 04:19:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:50.224 04:19:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:50.224 04:19:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:50.224 04:19:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.224 04:19:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:50.224 04:19:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:50.224 04:19:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:50.224 04:19:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:50.224 04:19:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:50.224 04:19:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:50.224 04:19:53 -- scripts/common.sh@344 -- # : 1 00:04:50.224 04:19:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:50.224 04:19:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.224 04:19:53 -- scripts/common.sh@364 -- # decimal 1 00:04:50.224 04:19:53 -- scripts/common.sh@352 -- # local d=1 00:04:50.224 04:19:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.224 04:19:53 -- scripts/common.sh@354 -- # echo 1 00:04:50.224 04:19:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:50.224 04:19:53 -- scripts/common.sh@365 -- # decimal 2 00:04:50.224 04:19:53 -- scripts/common.sh@352 -- # local d=2 00:04:50.224 04:19:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.224 04:19:53 -- scripts/common.sh@354 -- # echo 2 00:04:50.224 04:19:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:50.224 04:19:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:50.224 04:19:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:50.224 04:19:53 -- scripts/common.sh@367 -- # return 0 00:04:50.224 04:19:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.224 04:19:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.224 --rc genhtml_branch_coverage=1 00:04:50.224 --rc genhtml_function_coverage=1 00:04:50.224 --rc genhtml_legend=1 00:04:50.224 --rc geninfo_all_blocks=1 00:04:50.224 --rc geninfo_unexecuted_blocks=1 00:04:50.224 00:04:50.224 ' 00:04:50.224 04:19:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.224 --rc genhtml_branch_coverage=1 00:04:50.224 --rc genhtml_function_coverage=1 00:04:50.224 --rc genhtml_legend=1 00:04:50.224 --rc geninfo_all_blocks=1 00:04:50.224 --rc geninfo_unexecuted_blocks=1 00:04:50.224 00:04:50.224 ' 00:04:50.224 04:19:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.224 --rc genhtml_branch_coverage=1 00:04:50.224 --rc genhtml_function_coverage=1 00:04:50.224 --rc genhtml_legend=1 00:04:50.224 --rc geninfo_all_blocks=1 00:04:50.224 --rc geninfo_unexecuted_blocks=1 00:04:50.224 00:04:50.224 ' 00:04:50.224 04:19:53 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:50.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.224 --rc genhtml_branch_coverage=1 00:04:50.224 --rc genhtml_function_coverage=1 00:04:50.224 --rc genhtml_legend=1 00:04:50.224 --rc geninfo_all_blocks=1 00:04:50.224 --rc geninfo_unexecuted_blocks=1 00:04:50.224 00:04:50.224 ' 00:04:50.224 04:19:53 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:50.224 04:19:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:50.224 04:19:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:50.224 04:19:53 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.224 04:19:53 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.224 04:19:53 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.224 04:19:53 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.224 04:19:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:50.224 04:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.484 04:19:53 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54536 00:04:50.484 04:19:53 -- spdkcli/tcp.sh@27 -- # waitforlisten 54536 00:04:50.484 04:19:53 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.484 04:19:53 -- common/autotest_common.sh@829 -- # '[' -z 54536 ']' 00:04:50.484 04:19:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.484 04:19:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.484 04:19:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.484 04:19:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.484 04:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:50.484 [2024-12-07 04:19:53.525256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:50.484 [2024-12-07 04:19:53.525350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54536 ] 00:04:50.484 [2024-12-07 04:19:53.662658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.484 [2024-12-07 04:19:53.717566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.484 [2024-12-07 04:19:53.717969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.484 [2024-12-07 04:19:53.717981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.422 04:19:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.422 04:19:54 -- common/autotest_common.sh@862 -- # return 0 00:04:51.422 04:19:54 -- spdkcli/tcp.sh@31 -- # socat_pid=54553 00:04:51.422 04:19:54 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:51.422 04:19:54 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:51.682 [ 00:04:51.682 "bdev_malloc_delete", 00:04:51.682 "bdev_malloc_create", 00:04:51.682 "bdev_null_resize", 00:04:51.682 "bdev_null_delete", 00:04:51.682 "bdev_null_create", 00:04:51.682 "bdev_nvme_cuse_unregister", 00:04:51.682 "bdev_nvme_cuse_register", 00:04:51.682 "bdev_opal_new_user", 00:04:51.682 "bdev_opal_set_lock_state", 00:04:51.682 "bdev_opal_delete", 00:04:51.682 "bdev_opal_get_info", 00:04:51.682 "bdev_opal_create", 00:04:51.682 "bdev_nvme_opal_revert", 00:04:51.682 "bdev_nvme_opal_init", 00:04:51.682 "bdev_nvme_send_cmd", 00:04:51.682 "bdev_nvme_get_path_iostat", 00:04:51.682 "bdev_nvme_get_mdns_discovery_info", 00:04:51.682 "bdev_nvme_stop_mdns_discovery", 00:04:51.682 "bdev_nvme_start_mdns_discovery", 00:04:51.682 "bdev_nvme_set_multipath_policy", 00:04:51.682 "bdev_nvme_set_preferred_path", 00:04:51.682 "bdev_nvme_get_io_paths", 00:04:51.682 "bdev_nvme_remove_error_injection", 00:04:51.682 "bdev_nvme_add_error_injection", 00:04:51.682 "bdev_nvme_get_discovery_info", 00:04:51.682 "bdev_nvme_stop_discovery", 00:04:51.682 "bdev_nvme_start_discovery", 00:04:51.682 "bdev_nvme_get_controller_health_info", 00:04:51.682 "bdev_nvme_disable_controller", 00:04:51.682 "bdev_nvme_enable_controller", 00:04:51.682 "bdev_nvme_reset_controller", 00:04:51.682 "bdev_nvme_get_transport_statistics", 00:04:51.682 "bdev_nvme_apply_firmware", 00:04:51.682 "bdev_nvme_detach_controller", 00:04:51.682 "bdev_nvme_get_controllers", 00:04:51.682 "bdev_nvme_attach_controller", 00:04:51.682 "bdev_nvme_set_hotplug", 00:04:51.682 "bdev_nvme_set_options", 00:04:51.682 "bdev_passthru_delete", 00:04:51.682 "bdev_passthru_create", 00:04:51.682 "bdev_lvol_grow_lvstore", 00:04:51.682 "bdev_lvol_get_lvols", 00:04:51.682 "bdev_lvol_get_lvstores", 00:04:51.682 "bdev_lvol_delete", 00:04:51.682 "bdev_lvol_set_read_only", 00:04:51.682 "bdev_lvol_resize", 00:04:51.682 "bdev_lvol_decouple_parent", 00:04:51.682 "bdev_lvol_inflate", 00:04:51.682 "bdev_lvol_rename", 00:04:51.682 "bdev_lvol_clone_bdev", 00:04:51.682 "bdev_lvol_clone", 00:04:51.682 "bdev_lvol_snapshot", 00:04:51.682 "bdev_lvol_create", 00:04:51.682 "bdev_lvol_delete_lvstore", 00:04:51.682 "bdev_lvol_rename_lvstore", 00:04:51.682 "bdev_lvol_create_lvstore", 00:04:51.682 "bdev_raid_set_options", 00:04:51.682 "bdev_raid_remove_base_bdev", 00:04:51.682 "bdev_raid_add_base_bdev", 
00:04:51.682 "bdev_raid_delete", 00:04:51.682 "bdev_raid_create", 00:04:51.682 "bdev_raid_get_bdevs", 00:04:51.682 "bdev_error_inject_error", 00:04:51.682 "bdev_error_delete", 00:04:51.682 "bdev_error_create", 00:04:51.682 "bdev_split_delete", 00:04:51.682 "bdev_split_create", 00:04:51.682 "bdev_delay_delete", 00:04:51.682 "bdev_delay_create", 00:04:51.682 "bdev_delay_update_latency", 00:04:51.682 "bdev_zone_block_delete", 00:04:51.682 "bdev_zone_block_create", 00:04:51.682 "blobfs_create", 00:04:51.682 "blobfs_detect", 00:04:51.682 "blobfs_set_cache_size", 00:04:51.682 "bdev_aio_delete", 00:04:51.682 "bdev_aio_rescan", 00:04:51.682 "bdev_aio_create", 00:04:51.682 "bdev_ftl_set_property", 00:04:51.682 "bdev_ftl_get_properties", 00:04:51.682 "bdev_ftl_get_stats", 00:04:51.682 "bdev_ftl_unmap", 00:04:51.682 "bdev_ftl_unload", 00:04:51.682 "bdev_ftl_delete", 00:04:51.682 "bdev_ftl_load", 00:04:51.682 "bdev_ftl_create", 00:04:51.682 "bdev_virtio_attach_controller", 00:04:51.682 "bdev_virtio_scsi_get_devices", 00:04:51.682 "bdev_virtio_detach_controller", 00:04:51.682 "bdev_virtio_blk_set_hotplug", 00:04:51.682 "bdev_iscsi_delete", 00:04:51.682 "bdev_iscsi_create", 00:04:51.682 "bdev_iscsi_set_options", 00:04:51.682 "bdev_uring_delete", 00:04:51.682 "bdev_uring_create", 00:04:51.682 "accel_error_inject_error", 00:04:51.682 "ioat_scan_accel_module", 00:04:51.682 "dsa_scan_accel_module", 00:04:51.682 "iaa_scan_accel_module", 00:04:51.682 "vfu_virtio_create_scsi_endpoint", 00:04:51.682 "vfu_virtio_scsi_remove_target", 00:04:51.682 "vfu_virtio_scsi_add_target", 00:04:51.682 "vfu_virtio_create_blk_endpoint", 00:04:51.682 "vfu_virtio_delete_endpoint", 00:04:51.682 "iscsi_set_options", 00:04:51.682 "iscsi_get_auth_groups", 00:04:51.682 "iscsi_auth_group_remove_secret", 00:04:51.682 "iscsi_auth_group_add_secret", 00:04:51.682 "iscsi_delete_auth_group", 00:04:51.682 "iscsi_create_auth_group", 00:04:51.682 "iscsi_set_discovery_auth", 00:04:51.682 "iscsi_get_options", 00:04:51.682 "iscsi_target_node_request_logout", 00:04:51.682 "iscsi_target_node_set_redirect", 00:04:51.682 "iscsi_target_node_set_auth", 00:04:51.682 "iscsi_target_node_add_lun", 00:04:51.682 "iscsi_get_connections", 00:04:51.682 "iscsi_portal_group_set_auth", 00:04:51.682 "iscsi_start_portal_group", 00:04:51.682 "iscsi_delete_portal_group", 00:04:51.682 "iscsi_create_portal_group", 00:04:51.682 "iscsi_get_portal_groups", 00:04:51.682 "iscsi_delete_target_node", 00:04:51.682 "iscsi_target_node_remove_pg_ig_maps", 00:04:51.682 "iscsi_target_node_add_pg_ig_maps", 00:04:51.682 "iscsi_create_target_node", 00:04:51.682 "iscsi_get_target_nodes", 00:04:51.682 "iscsi_delete_initiator_group", 00:04:51.682 "iscsi_initiator_group_remove_initiators", 00:04:51.682 "iscsi_initiator_group_add_initiators", 00:04:51.682 "iscsi_create_initiator_group", 00:04:51.682 "iscsi_get_initiator_groups", 00:04:51.682 "nvmf_set_crdt", 00:04:51.682 "nvmf_set_config", 00:04:51.682 "nvmf_set_max_subsystems", 00:04:51.682 "nvmf_subsystem_get_listeners", 00:04:51.682 "nvmf_subsystem_get_qpairs", 00:04:51.682 "nvmf_subsystem_get_controllers", 00:04:51.682 "nvmf_get_stats", 00:04:51.682 "nvmf_get_transports", 00:04:51.682 "nvmf_create_transport", 00:04:51.682 "nvmf_get_targets", 00:04:51.682 "nvmf_delete_target", 00:04:51.682 "nvmf_create_target", 00:04:51.682 "nvmf_subsystem_allow_any_host", 00:04:51.682 "nvmf_subsystem_remove_host", 00:04:51.682 "nvmf_subsystem_add_host", 00:04:51.682 "nvmf_subsystem_remove_ns", 00:04:51.682 "nvmf_subsystem_add_ns", 00:04:51.682 
"nvmf_subsystem_listener_set_ana_state", 00:04:51.682 "nvmf_discovery_get_referrals", 00:04:51.682 "nvmf_discovery_remove_referral", 00:04:51.682 "nvmf_discovery_add_referral", 00:04:51.682 "nvmf_subsystem_remove_listener", 00:04:51.682 "nvmf_subsystem_add_listener", 00:04:51.682 "nvmf_delete_subsystem", 00:04:51.682 "nvmf_create_subsystem", 00:04:51.682 "nvmf_get_subsystems", 00:04:51.682 "env_dpdk_get_mem_stats", 00:04:51.682 "nbd_get_disks", 00:04:51.682 "nbd_stop_disk", 00:04:51.682 "nbd_start_disk", 00:04:51.682 "ublk_recover_disk", 00:04:51.682 "ublk_get_disks", 00:04:51.682 "ublk_stop_disk", 00:04:51.682 "ublk_start_disk", 00:04:51.682 "ublk_destroy_target", 00:04:51.682 "ublk_create_target", 00:04:51.682 "virtio_blk_create_transport", 00:04:51.682 "virtio_blk_get_transports", 00:04:51.682 "vhost_controller_set_coalescing", 00:04:51.682 "vhost_get_controllers", 00:04:51.682 "vhost_delete_controller", 00:04:51.682 "vhost_create_blk_controller", 00:04:51.682 "vhost_scsi_controller_remove_target", 00:04:51.682 "vhost_scsi_controller_add_target", 00:04:51.682 "vhost_start_scsi_controller", 00:04:51.682 "vhost_create_scsi_controller", 00:04:51.682 "thread_set_cpumask", 00:04:51.682 "framework_get_scheduler", 00:04:51.682 "framework_set_scheduler", 00:04:51.682 "framework_get_reactors", 00:04:51.682 "thread_get_io_channels", 00:04:51.682 "thread_get_pollers", 00:04:51.682 "thread_get_stats", 00:04:51.682 "framework_monitor_context_switch", 00:04:51.682 "spdk_kill_instance", 00:04:51.682 "log_enable_timestamps", 00:04:51.682 "log_get_flags", 00:04:51.682 "log_clear_flag", 00:04:51.682 "log_set_flag", 00:04:51.682 "log_get_level", 00:04:51.682 "log_set_level", 00:04:51.682 "log_get_print_level", 00:04:51.682 "log_set_print_level", 00:04:51.682 "framework_enable_cpumask_locks", 00:04:51.682 "framework_disable_cpumask_locks", 00:04:51.682 "framework_wait_init", 00:04:51.682 "framework_start_init", 00:04:51.682 "scsi_get_devices", 00:04:51.682 "bdev_get_histogram", 00:04:51.682 "bdev_enable_histogram", 00:04:51.682 "bdev_set_qos_limit", 00:04:51.682 "bdev_set_qd_sampling_period", 00:04:51.682 "bdev_get_bdevs", 00:04:51.683 "bdev_reset_iostat", 00:04:51.683 "bdev_get_iostat", 00:04:51.683 "bdev_examine", 00:04:51.683 "bdev_wait_for_examine", 00:04:51.683 "bdev_set_options", 00:04:51.683 "notify_get_notifications", 00:04:51.683 "notify_get_types", 00:04:51.683 "accel_get_stats", 00:04:51.683 "accel_set_options", 00:04:51.683 "accel_set_driver", 00:04:51.683 "accel_crypto_key_destroy", 00:04:51.683 "accel_crypto_keys_get", 00:04:51.683 "accel_crypto_key_create", 00:04:51.683 "accel_assign_opc", 00:04:51.683 "accel_get_module_info", 00:04:51.683 "accel_get_opc_assignments", 00:04:51.683 "vmd_rescan", 00:04:51.683 "vmd_remove_device", 00:04:51.683 "vmd_enable", 00:04:51.683 "sock_set_default_impl", 00:04:51.683 "sock_impl_set_options", 00:04:51.683 "sock_impl_get_options", 00:04:51.683 "iobuf_get_stats", 00:04:51.683 "iobuf_set_options", 00:04:51.683 "framework_get_pci_devices", 00:04:51.683 "framework_get_config", 00:04:51.683 "framework_get_subsystems", 00:04:51.683 "vfu_tgt_set_base_path", 00:04:51.683 "trace_get_info", 00:04:51.683 "trace_get_tpoint_group_mask", 00:04:51.683 "trace_disable_tpoint_group", 00:04:51.683 "trace_enable_tpoint_group", 00:04:51.683 "trace_clear_tpoint_mask", 00:04:51.683 "trace_set_tpoint_mask", 00:04:51.683 "spdk_get_version", 00:04:51.683 "rpc_get_methods" 00:04:51.683 ] 00:04:51.683 04:19:54 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:51.683 
04:19:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.683 04:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.683 04:19:54 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:51.683 04:19:54 -- spdkcli/tcp.sh@38 -- # killprocess 54536 00:04:51.683 04:19:54 -- common/autotest_common.sh@936 -- # '[' -z 54536 ']' 00:04:51.683 04:19:54 -- common/autotest_common.sh@940 -- # kill -0 54536 00:04:51.683 04:19:54 -- common/autotest_common.sh@941 -- # uname 00:04:51.683 04:19:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.683 04:19:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54536 00:04:51.683 killing process with pid 54536 00:04:51.683 04:19:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.683 04:19:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.683 04:19:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54536' 00:04:51.683 04:19:54 -- common/autotest_common.sh@955 -- # kill 54536 00:04:51.683 04:19:54 -- common/autotest_common.sh@960 -- # wait 54536 00:04:51.942 ************************************ 00:04:51.942 END TEST spdkcli_tcp 00:04:51.942 ************************************ 00:04:51.942 00:04:51.942 real 0m1.738s 00:04:51.942 user 0m3.224s 00:04:51.942 sys 0m0.350s 00:04:51.942 04:19:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.942 04:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:51.942 04:19:55 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.942 04:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.942 04:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.942 04:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:51.942 ************************************ 00:04:51.942 START TEST dpdk_mem_utility 00:04:51.942 ************************************ 00:04:51.942 04:19:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.942 * Looking for test storage... 
00:04:51.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:51.942 04:19:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.942 04:19:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.942 04:19:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.201 04:19:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.201 04:19:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.201 04:19:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.201 04:19:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.201 04:19:55 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.201 04:19:55 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.201 04:19:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.201 04:19:55 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.201 04:19:55 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.201 04:19:55 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.201 04:19:55 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.201 04:19:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.201 04:19:55 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.201 04:19:55 -- scripts/common.sh@344 -- # : 1 00:04:52.201 04:19:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.201 04:19:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.201 04:19:55 -- scripts/common.sh@364 -- # decimal 1 00:04:52.201 04:19:55 -- scripts/common.sh@352 -- # local d=1 00:04:52.201 04:19:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.201 04:19:55 -- scripts/common.sh@354 -- # echo 1 00:04:52.201 04:19:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.201 04:19:55 -- scripts/common.sh@365 -- # decimal 2 00:04:52.201 04:19:55 -- scripts/common.sh@352 -- # local d=2 00:04:52.201 04:19:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.201 04:19:55 -- scripts/common.sh@354 -- # echo 2 00:04:52.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.201 04:19:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.201 04:19:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.201 04:19:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.201 04:19:55 -- scripts/common.sh@367 -- # return 0 00:04:52.201 04:19:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.201 04:19:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.201 --rc genhtml_branch_coverage=1 00:04:52.201 --rc genhtml_function_coverage=1 00:04:52.201 --rc genhtml_legend=1 00:04:52.201 --rc geninfo_all_blocks=1 00:04:52.201 --rc geninfo_unexecuted_blocks=1 00:04:52.201 00:04:52.201 ' 00:04:52.201 04:19:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.201 --rc genhtml_branch_coverage=1 00:04:52.201 --rc genhtml_function_coverage=1 00:04:52.201 --rc genhtml_legend=1 00:04:52.201 --rc geninfo_all_blocks=1 00:04:52.201 --rc geninfo_unexecuted_blocks=1 00:04:52.201 00:04:52.201 ' 00:04:52.201 04:19:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.201 --rc genhtml_branch_coverage=1 00:04:52.201 --rc genhtml_function_coverage=1 00:04:52.201 --rc genhtml_legend=1 00:04:52.201 --rc geninfo_all_blocks=1 00:04:52.201 --rc geninfo_unexecuted_blocks=1 00:04:52.201 00:04:52.201 ' 00:04:52.201 04:19:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.201 --rc genhtml_branch_coverage=1 00:04:52.201 --rc genhtml_function_coverage=1 00:04:52.201 --rc genhtml_legend=1 00:04:52.201 --rc geninfo_all_blocks=1 00:04:52.201 --rc geninfo_unexecuted_blocks=1 00:04:52.201 00:04:52.201 ' 00:04:52.201 04:19:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:52.201 04:19:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54634 00:04:52.201 04:19:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54634 00:04:52.201 04:19:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.201 04:19:55 -- common/autotest_common.sh@829 -- # '[' -z 54634 ']' 00:04:52.201 04:19:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.201 04:19:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.201 04:19:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.201 04:19:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.201 04:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:52.201 [2024-12-07 04:19:55.317237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:52.201 [2024-12-07 04:19:55.317329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54634 ] 00:04:52.460 [2024-12-07 04:19:55.449124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.460 [2024-12-07 04:19:55.500216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.460 [2024-12-07 04:19:55.500601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.398 04:19:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.398 04:19:56 -- common/autotest_common.sh@862 -- # return 0 00:04:53.398 04:19:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:53.398 04:19:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:53.398 04:19:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.398 04:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.398 { 00:04:53.398 "filename": "/tmp/spdk_mem_dump.txt" 00:04:53.398 } 00:04:53.398 04:19:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.398 04:19:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:53.398 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:53.398 1 heaps totaling size 814.000000 MiB 00:04:53.398 size: 814.000000 MiB heap id: 0 00:04:53.398 end heaps---------- 00:04:53.398 8 mempools totaling size 598.116089 MiB 00:04:53.398 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:53.398 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:53.398 size: 84.521057 MiB name: bdev_io_54634 00:04:53.398 size: 51.011292 MiB name: evtpool_54634 00:04:53.398 size: 50.003479 MiB name: msgpool_54634 00:04:53.398 size: 21.763794 MiB name: PDU_Pool 00:04:53.398 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:53.398 size: 0.026123 MiB name: Session_Pool 00:04:53.398 end mempools------- 00:04:53.398 6 memzones totaling size 4.142822 MiB 00:04:53.398 size: 1.000366 MiB name: RG_ring_0_54634 00:04:53.398 size: 1.000366 MiB name: RG_ring_1_54634 00:04:53.398 size: 1.000366 MiB name: RG_ring_4_54634 00:04:53.398 size: 1.000366 MiB name: RG_ring_5_54634 00:04:53.398 size: 0.125366 MiB name: RG_ring_2_54634 00:04:53.398 size: 0.015991 MiB name: RG_ring_3_54634 00:04:53.398 end memzones------- 00:04:53.398 04:19:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:53.398 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:04:53.398 list of free elements. 
size: 12.471375 MiB 00:04:53.398 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:53.398 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:53.398 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:53.398 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:53.398 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:53.398 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:53.398 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:53.398 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:53.398 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:53.398 element at address: 0x20001aa00000 with size: 0.568970 MiB 00:04:53.398 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:53.398 element at address: 0x200000800000 with size: 0.486328 MiB 00:04:53.398 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:53.398 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:53.398 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:53.398 list of standard malloc elements. size: 199.266052 MiB 00:04:53.398 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:53.398 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:53.398 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:53.398 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:53.398 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:53.398 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:53.398 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:53.398 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:53.398 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:53.398 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:53.398 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:53.398 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:53.398 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:53.398 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93700 with size: 0.000183 MiB 
00:04:53.399 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:53.399 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:53.399 element at 
address: 0x200027e6c900 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:53.399 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6edc0 
with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:53.400 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:53.400 list of memzone associated elements. size: 602.262573 MiB 00:04:53.400 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:53.400 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:53.400 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:53.400 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:53.400 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:53.400 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54634_0 00:04:53.400 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:53.400 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54634_0 00:04:53.400 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:53.400 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54634_0 00:04:53.400 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:53.400 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:53.400 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:53.400 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:53.400 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:53.400 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54634 00:04:53.400 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:53.400 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54634 00:04:53.400 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:53.400 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54634 00:04:53.400 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:53.400 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:53.400 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:53.400 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:53.400 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:53.400 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:53.400 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:53.400 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:53.400 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:53.400 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54634 00:04:53.400 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:53.400 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54634 00:04:53.400 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:53.400 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54634 00:04:53.400 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:53.400 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54634 00:04:53.400 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:53.400 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54634 00:04:53.400 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:53.400 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:53.400 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:53.400 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:53.400 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:53.400 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:53.400 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:53.400 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54634 00:04:53.400 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:53.400 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:53.400 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:53.400 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:53.400 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:53.400 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54634 00:04:53.400 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:53.400 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:53.400 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:53.400 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54634 00:04:53.400 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:53.400 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54634 00:04:53.400 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:53.400 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:53.400 04:19:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:53.400 04:19:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54634 00:04:53.400 04:19:56 -- common/autotest_common.sh@936 -- # '[' -z 54634 ']' 00:04:53.400 04:19:56 -- common/autotest_common.sh@940 -- # kill -0 54634 00:04:53.400 04:19:56 -- common/autotest_common.sh@941 -- # uname 00:04:53.400 04:19:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.400 04:19:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54634 00:04:53.400 killing process with pid 54634 00:04:53.400 04:19:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
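The per-element dump above from test_dpdk_mem_info.sh is easier to read in aggregate than line by line. A minimal sketch for totalling it offline, assuming the dump has been saved to a file named mem_dump.log (a hypothetical name; the test does not write such a file):

    # sum the sizes from the "element at address: ... with size: N MiB" lines shown above
    grep 'element at address' mem_dump.log \
      | grep -o 'size: [0-9.]* MiB' \
      | awk '{total += $2} END {printf "elements: %d  total: %.6f MiB\n", NR, total}'

This is only a reading aid for the dump format above; it is not part of test_dpdk_mem_info.sh.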
00:04:53.400 04:19:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.400 04:19:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54634' 00:04:53.400 04:19:56 -- common/autotest_common.sh@955 -- # kill 54634 00:04:53.400 04:19:56 -- common/autotest_common.sh@960 -- # wait 54634 00:04:53.660 ************************************ 00:04:53.660 END TEST dpdk_mem_utility 00:04:53.660 ************************************ 00:04:53.660 00:04:53.660 real 0m1.676s 00:04:53.660 user 0m1.906s 00:04:53.660 sys 0m0.338s 00:04:53.660 04:19:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.660 04:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.660 04:19:56 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.660 04:19:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.660 04:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.660 04:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.660 ************************************ 00:04:53.660 START TEST event 00:04:53.660 ************************************ 00:04:53.660 04:19:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.660 * Looking for test storage... 00:04:53.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:53.660 04:19:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.660 04:19:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.660 04:19:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.919 04:19:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.919 04:19:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.919 04:19:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.919 04:19:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.919 04:19:56 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.919 04:19:56 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.919 04:19:56 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.919 04:19:56 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.919 04:19:56 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.919 04:19:56 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.919 04:19:56 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.919 04:19:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.919 04:19:56 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.919 04:19:56 -- scripts/common.sh@344 -- # : 1 00:04:53.919 04:19:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.919 04:19:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.919 04:19:56 -- scripts/common.sh@364 -- # decimal 1 00:04:53.919 04:19:56 -- scripts/common.sh@352 -- # local d=1 00:04:53.919 04:19:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.919 04:19:56 -- scripts/common.sh@354 -- # echo 1 00:04:53.919 04:19:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.919 04:19:56 -- scripts/common.sh@365 -- # decimal 2 00:04:53.919 04:19:56 -- scripts/common.sh@352 -- # local d=2 00:04:53.919 04:19:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.919 04:19:56 -- scripts/common.sh@354 -- # echo 2 00:04:53.919 04:19:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.919 04:19:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.919 04:19:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.919 04:19:56 -- scripts/common.sh@367 -- # return 0 00:04:53.919 04:19:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.919 04:19:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.919 --rc genhtml_branch_coverage=1 00:04:53.919 --rc genhtml_function_coverage=1 00:04:53.919 --rc genhtml_legend=1 00:04:53.919 --rc geninfo_all_blocks=1 00:04:53.919 --rc geninfo_unexecuted_blocks=1 00:04:53.920 00:04:53.920 ' 00:04:53.920 04:19:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.920 --rc genhtml_branch_coverage=1 00:04:53.920 --rc genhtml_function_coverage=1 00:04:53.920 --rc genhtml_legend=1 00:04:53.920 --rc geninfo_all_blocks=1 00:04:53.920 --rc geninfo_unexecuted_blocks=1 00:04:53.920 00:04:53.920 ' 00:04:53.920 04:19:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.920 --rc genhtml_branch_coverage=1 00:04:53.920 --rc genhtml_function_coverage=1 00:04:53.920 --rc genhtml_legend=1 00:04:53.920 --rc geninfo_all_blocks=1 00:04:53.920 --rc geninfo_unexecuted_blocks=1 00:04:53.920 00:04:53.920 ' 00:04:53.920 04:19:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.920 --rc genhtml_branch_coverage=1 00:04:53.920 --rc genhtml_function_coverage=1 00:04:53.920 --rc genhtml_legend=1 00:04:53.920 --rc geninfo_all_blocks=1 00:04:53.920 --rc geninfo_unexecuted_blocks=1 00:04:53.920 00:04:53.920 ' 00:04:53.920 04:19:56 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:53.920 04:19:56 -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.920 04:19:56 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.920 04:19:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:53.920 04:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.920 04:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:53.920 ************************************ 00:04:53.920 START TEST event_perf 00:04:53.920 ************************************ 00:04:53.920 04:19:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.920 Running I/O for 1 seconds...[2024-12-07 04:19:56.980706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
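The lt/cmp_versions trace a few lines above is scripts/common.sh deciding whether the detected lcov (1.15 here) is older than 2 before the legacy --rc lcov_branch_coverage/lcov_function_coverage options are exported. A hedged sketch of the same decision using sort -V; the name version_lt is illustrative and this is not the repository's actual helper, which is the field-by-field loop traced above:

    # returns 0 if $1 is strictly older than $2 (the same answer the trace reaches for 1.15 vs 2)
    version_lt() {
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the legacy branch/function coverage options"

The sketch only restates the comparison being traced; the in-tree implementation remains the decimal/read -ra loop shown in the xtrace output.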
00:04:53.920 [2024-12-07 04:19:56.980930] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54718 ] 00:04:53.920 [2024-12-07 04:19:57.112752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.179 [2024-12-07 04:19:57.166107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.179 [2024-12-07 04:19:57.166210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.179 [2024-12-07 04:19:57.166347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.179 [2024-12-07 04:19:57.166350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.117 Running I/O for 1 seconds... 00:04:55.117 lcore 0: 200558 00:04:55.117 lcore 1: 200558 00:04:55.117 lcore 2: 200559 00:04:55.117 lcore 3: 200558 00:04:55.117 done. 00:04:55.117 00:04:55.117 real 0m1.284s 00:04:55.117 user 0m4.130s 00:04:55.117 sys 0m0.040s 00:04:55.117 04:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.117 04:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.117 ************************************ 00:04:55.117 END TEST event_perf 00:04:55.117 ************************************ 00:04:55.117 04:19:58 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.117 04:19:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:55.117 04:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.117 04:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.117 ************************************ 00:04:55.117 START TEST event_reactor 00:04:55.117 ************************************ 00:04:55.117 04:19:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.117 [2024-12-07 04:19:58.323915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
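event_perf above was invoked with -m 0xF -t 1, that is four reactors for a one second run, and printed one events-processed count per lcore (about 200558 each). Under the same assumptions as this CI run (SPDK built at /home/vagrant/spdk_repo/spdk and hugepages already configured), a rough sketch of sweeping a few core masks with the same binary and flags; the mask values chosen here are arbitrary:

    # reuse the binary and flags from the run above across several core masks
    for mask in 0x1 0x3 0xF; do
      echo "== core mask $mask =="
      /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m "$mask" -t 1 \
        | grep -o 'lcore [0-9]*: [0-9]*'
    done

This is not part of event.sh; it is only a way to compare per-lcore counts like the ones logged above across different masks.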
00:04:55.117 [2024-12-07 04:19:58.324229] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54751 ] 00:04:55.376 [2024-12-07 04:19:58.453273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.376 [2024-12-07 04:19:58.504575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.754 test_start 00:04:56.754 oneshot 00:04:56.754 tick 100 00:04:56.754 tick 100 00:04:56.754 tick 250 00:04:56.754 tick 100 00:04:56.754 tick 100 00:04:56.754 tick 100 00:04:56.754 tick 250 00:04:56.754 tick 500 00:04:56.754 tick 100 00:04:56.754 tick 100 00:04:56.754 tick 250 00:04:56.754 tick 100 00:04:56.754 tick 100 00:04:56.754 test_end 00:04:56.754 00:04:56.754 real 0m1.278s 00:04:56.754 user 0m1.138s 00:04:56.754 sys 0m0.034s 00:04:56.754 ************************************ 00:04:56.754 END TEST event_reactor 00:04:56.754 ************************************ 00:04:56.754 04:19:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.754 04:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:56.754 04:19:59 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.754 04:19:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:56.754 04:19:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.754 04:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:56.754 ************************************ 00:04:56.754 START TEST event_reactor_perf 00:04:56.754 ************************************ 00:04:56.754 04:19:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.754 [2024-12-07 04:19:59.656204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:56.754 [2024-12-07 04:19:59.656293] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54781 ] 00:04:56.754 [2024-12-07 04:19:59.793371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.754 [2024-12-07 04:19:59.849342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.127 test_start 00:04:58.127 test_end 00:04:58.127 Performance: 423222 events per second 00:04:58.127 00:04:58.127 real 0m1.308s 00:04:58.127 user 0m1.161s 00:04:58.127 sys 0m0.041s 00:04:58.127 04:20:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.127 ************************************ 00:04:58.127 END TEST event_reactor_perf 00:04:58.127 ************************************ 00:04:58.127 04:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.127 04:20:00 -- event/event.sh@49 -- # uname -s 00:04:58.127 04:20:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.127 04:20:00 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.127 04:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.127 04:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.127 04:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:58.127 ************************************ 00:04:58.127 START TEST event_scheduler 00:04:58.127 ************************************ 00:04:58.127 04:20:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.127 * Looking for test storage... 00:04:58.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:58.127 04:20:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.127 04:20:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.127 04:20:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.127 04:20:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.127 04:20:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:58.127 04:20:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:58.127 04:20:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:58.127 04:20:01 -- scripts/common.sh@335 -- # IFS=.-: 00:04:58.127 04:20:01 -- scripts/common.sh@335 -- # read -ra ver1 00:04:58.127 04:20:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.127 04:20:01 -- scripts/common.sh@336 -- # read -ra ver2 00:04:58.127 04:20:01 -- scripts/common.sh@337 -- # local 'op=<' 00:04:58.127 04:20:01 -- scripts/common.sh@339 -- # ver1_l=2 00:04:58.127 04:20:01 -- scripts/common.sh@340 -- # ver2_l=1 00:04:58.127 04:20:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:58.127 04:20:01 -- scripts/common.sh@343 -- # case "$op" in 00:04:58.127 04:20:01 -- scripts/common.sh@344 -- # : 1 00:04:58.127 04:20:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:58.127 04:20:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.127 04:20:01 -- scripts/common.sh@364 -- # decimal 1 00:04:58.127 04:20:01 -- scripts/common.sh@352 -- # local d=1 00:04:58.127 04:20:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.127 04:20:01 -- scripts/common.sh@354 -- # echo 1 00:04:58.127 04:20:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:58.127 04:20:01 -- scripts/common.sh@365 -- # decimal 2 00:04:58.127 04:20:01 -- scripts/common.sh@352 -- # local d=2 00:04:58.127 04:20:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.127 04:20:01 -- scripts/common.sh@354 -- # echo 2 00:04:58.127 04:20:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:58.127 04:20:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:58.127 04:20:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:58.127 04:20:01 -- scripts/common.sh@367 -- # return 0 00:04:58.127 04:20:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.127 04:20:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:58.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.127 --rc genhtml_branch_coverage=1 00:04:58.127 --rc genhtml_function_coverage=1 00:04:58.127 --rc genhtml_legend=1 00:04:58.127 --rc geninfo_all_blocks=1 00:04:58.127 --rc geninfo_unexecuted_blocks=1 00:04:58.127 00:04:58.127 ' 00:04:58.127 04:20:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:58.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.127 --rc genhtml_branch_coverage=1 00:04:58.127 --rc genhtml_function_coverage=1 00:04:58.127 --rc genhtml_legend=1 00:04:58.127 --rc geninfo_all_blocks=1 00:04:58.127 --rc geninfo_unexecuted_blocks=1 00:04:58.127 00:04:58.127 ' 00:04:58.127 04:20:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:58.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.127 --rc genhtml_branch_coverage=1 00:04:58.127 --rc genhtml_function_coverage=1 00:04:58.127 --rc genhtml_legend=1 00:04:58.127 --rc geninfo_all_blocks=1 00:04:58.128 --rc geninfo_unexecuted_blocks=1 00:04:58.128 00:04:58.128 ' 00:04:58.128 04:20:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:58.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.128 --rc genhtml_branch_coverage=1 00:04:58.128 --rc genhtml_function_coverage=1 00:04:58.128 --rc genhtml_legend=1 00:04:58.128 --rc geninfo_all_blocks=1 00:04:58.128 --rc geninfo_unexecuted_blocks=1 00:04:58.128 00:04:58.128 ' 00:04:58.128 04:20:01 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.128 04:20:01 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54855 00:04:58.128 04:20:01 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.128 04:20:01 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.128 04:20:01 -- scheduler/scheduler.sh@37 -- # waitforlisten 54855 00:04:58.128 04:20:01 -- common/autotest_common.sh@829 -- # '[' -z 54855 ']' 00:04:58.128 04:20:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.128 04:20:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.128 04:20:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
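waitforlisten above is the usual autotest wait: the scheduler app (pid 54855) was started with --wait-for-rpc, and the helper polls until the app answers on /var/tmp/spdk.sock before the test issues its framework_set_scheduler RPCs. A minimal sketch of that polling idea, not the real autotest_common.sh helper; the function name and the use of rpc_get_methods here are illustrative choices:

    # poll an SPDK app's RPC socket until it responds, then return
    wait_for_rpc_sock() {
      local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
      while (( retries-- > 0 )); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          return 0
        fi
        sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
    }

The real helper also takes the pid (54855 here), presumably so it can give up early if the process dies; the sketch skips that and only retries against the socket, mirroring the max_retries/rpc_addr locals visible in the trace.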
00:04:58.128 04:20:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.128 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.128 [2024-12-07 04:20:01.229463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:58.128 [2024-12-07 04:20:01.229559] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54855 ] 00:04:58.386 [2024-12-07 04:20:01.368834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.386 [2024-12-07 04:20:01.440770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.386 [2024-12-07 04:20:01.440848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.386 [2024-12-07 04:20:01.440998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.386 [2024-12-07 04:20:01.441006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.386 04:20:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.386 04:20:01 -- common/autotest_common.sh@862 -- # return 0 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 POWER: Env isn't set yet! 00:04:58.386 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:58.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.386 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.386 POWER: Attempting to initialise PSTAT power management... 00:04:58.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.386 POWER: Cannot set governor of lcore 0 to performance 00:04:58.386 POWER: Attempting to initialise AMD PSTATE power management... 00:04:58.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.386 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.386 POWER: Attempting to initialise CPPC power management... 00:04:58.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.386 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.386 POWER: Attempting to initialise VM power management... 
00:04:58.386 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:58.386 POWER: Unable to set Power Management Environment for lcore 0 00:04:58.386 [2024-12-07 04:20:01.485998] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:58.386 [2024-12-07 04:20:01.486013] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:58.386 [2024-12-07 04:20:01.486023] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:58.386 [2024-12-07 04:20:01.486038] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:58.386 [2024-12-07 04:20:01.486047] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:58.386 [2024-12-07 04:20:01.486056] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 [2024-12-07 04:20:01.547537] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:58.386 04:20:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.386 04:20:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 ************************************ 00:04:58.386 START TEST scheduler_create_thread 00:04:58.386 ************************************ 00:04:58.386 04:20:01 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 2 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 3 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 4 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 5 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 6 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 7 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.386 8 00:04:58.386 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.386 04:20:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:58.386 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.386 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 9 00:04:58.644 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 04:20:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:58.644 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 10 00:04:58.644 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 04:20:01 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:58.644 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 04:20:01 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:58.644 04:20:01 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:58.644 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 04:20:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.644 04:20:01 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:58.644 04:20:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.644 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:59.209 04:20:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.209 04:20:02 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.209 04:20:02 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.209 04:20:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.209 04:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:00.143 04:20:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.143 00:05:00.143 real 0m1.754s 00:05:00.143 user 0m0.011s 00:05:00.143 sys 0m0.007s 00:05:00.143 04:20:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.143 04:20:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.143 
************************************ 00:05:00.143 END TEST scheduler_create_thread 00:05:00.143 ************************************ 00:05:00.143 04:20:03 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.143 04:20:03 -- scheduler/scheduler.sh@46 -- # killprocess 54855 00:05:00.143 04:20:03 -- common/autotest_common.sh@936 -- # '[' -z 54855 ']' 00:05:00.143 04:20:03 -- common/autotest_common.sh@940 -- # kill -0 54855 00:05:00.143 04:20:03 -- common/autotest_common.sh@941 -- # uname 00:05:00.143 04:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.143 04:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54855 00:05:00.402 04:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:00.402 killing process with pid 54855 00:05:00.402 04:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:00.402 04:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54855' 00:05:00.402 04:20:03 -- common/autotest_common.sh@955 -- # kill 54855 00:05:00.402 04:20:03 -- common/autotest_common.sh@960 -- # wait 54855 00:05:00.660 [2024-12-07 04:20:03.789531] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:00.919 00:05:00.919 real 0m2.966s 00:05:00.919 user 0m3.671s 00:05:00.919 sys 0m0.302s 00:05:00.919 04:20:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.919 04:20:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.919 ************************************ 00:05:00.919 END TEST event_scheduler 00:05:00.919 ************************************ 00:05:00.919 04:20:04 -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.919 04:20:04 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.919 04:20:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.919 04:20:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.919 04:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:00.919 ************************************ 00:05:00.919 START TEST app_repeat 00:05:00.919 ************************************ 00:05:00.919 04:20:04 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:00.919 04:20:04 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.919 04:20:04 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.919 04:20:04 -- event/event.sh@13 -- # local nbd_list 00:05:00.919 04:20:04 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.919 04:20:04 -- event/event.sh@14 -- # local bdev_list 00:05:00.920 04:20:04 -- event/event.sh@15 -- # local repeat_times=4 00:05:00.920 04:20:04 -- event/event.sh@17 -- # modprobe nbd 00:05:00.920 04:20:04 -- event/event.sh@19 -- # repeat_pid=54931 00:05:00.920 04:20:04 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.920 04:20:04 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.920 Process app_repeat pid: 54931 00:05:00.920 04:20:04 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54931' 00:05:00.920 04:20:04 -- event/event.sh@23 -- # for i in {0..2} 00:05:00.920 spdk_app_start Round 0 00:05:00.920 04:20:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.920 04:20:04 -- event/event.sh@25 -- # waitforlisten 54931 /var/tmp/spdk-nbd.sock 00:05:00.920 04:20:04 -- common/autotest_common.sh@829 -- # '[' -z 54931 ']' 00:05:00.920 04:20:04 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.920 04:20:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.920 04:20:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.920 04:20:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.920 04:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:00.920 [2024-12-07 04:20:04.048022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:00.920 [2024-12-07 04:20:04.048090] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54931 ] 00:05:01.179 [2024-12-07 04:20:04.180120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.179 [2024-12-07 04:20:04.231536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.179 [2024-12-07 04:20:04.231561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.179 04:20:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.179 04:20:04 -- common/autotest_common.sh@862 -- # return 0 00:05:01.179 04:20:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.437 Malloc0 00:05:01.437 04:20:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.697 Malloc1 00:05:01.697 04:20:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@12 -- # local i 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.697 04:20:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.954 /dev/nbd0 00:05:01.955 04:20:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.955 04:20:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.955 04:20:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:01.955 04:20:05 -- common/autotest_common.sh@867 -- # local i 00:05:01.955 04:20:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:01.955 
04:20:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:01.955 04:20:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:01.955 04:20:05 -- common/autotest_common.sh@871 -- # break 00:05:01.955 04:20:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:01.955 04:20:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:01.955 04:20:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.955 1+0 records in 00:05:01.955 1+0 records out 00:05:01.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259777 s, 15.8 MB/s 00:05:01.955 04:20:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.955 04:20:05 -- common/autotest_common.sh@884 -- # size=4096 00:05:01.955 04:20:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.955 04:20:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:01.955 04:20:05 -- common/autotest_common.sh@887 -- # return 0 00:05:01.955 04:20:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.955 04:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.955 04:20:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.213 /dev/nbd1 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.213 04:20:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:02.213 04:20:05 -- common/autotest_common.sh@867 -- # local i 00:05:02.213 04:20:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:02.213 04:20:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:02.213 04:20:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:02.213 04:20:05 -- common/autotest_common.sh@871 -- # break 00:05:02.213 04:20:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:02.213 04:20:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:02.213 04:20:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.213 1+0 records in 00:05:02.213 1+0 records out 00:05:02.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251011 s, 16.3 MB/s 00:05:02.213 04:20:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.213 04:20:05 -- common/autotest_common.sh@884 -- # size=4096 00:05:02.213 04:20:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.213 04:20:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:02.213 04:20:05 -- common/autotest_common.sh@887 -- # return 0 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.213 04:20:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.472 { 00:05:02.472 "nbd_device": "/dev/nbd0", 00:05:02.472 "bdev_name": "Malloc0" 00:05:02.472 }, 00:05:02.472 { 00:05:02.472 "nbd_device": 
"/dev/nbd1", 00:05:02.472 "bdev_name": "Malloc1" 00:05:02.472 } 00:05:02.472 ]' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.472 { 00:05:02.472 "nbd_device": "/dev/nbd0", 00:05:02.472 "bdev_name": "Malloc0" 00:05:02.472 }, 00:05:02.472 { 00:05:02.472 "nbd_device": "/dev/nbd1", 00:05:02.472 "bdev_name": "Malloc1" 00:05:02.472 } 00:05:02.472 ]' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.472 /dev/nbd1' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.472 /dev/nbd1' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.472 256+0 records in 00:05:02.472 256+0 records out 00:05:02.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767409 s, 137 MB/s 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.472 256+0 records in 00:05:02.472 256+0 records out 00:05:02.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187004 s, 56.1 MB/s 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.472 256+0 records in 00:05:02.472 256+0 records out 00:05:02.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287836 s, 36.4 MB/s 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@51 -- # local i 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.472 04:20:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@41 -- # break 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.731 04:20:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@41 -- # break 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.990 04:20:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.249 04:20:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.249 04:20:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.249 04:20:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@65 -- # true 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.507 04:20:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.507 04:20:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.766 04:20:06 -- event/event.sh@35 -- # sleep 3 00:05:03.766 [2024-12-07 04:20:06.952481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.024 [2024-12-07 04:20:07.007223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.024 
[2024-12-07 04:20:07.007235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.024 [2024-12-07 04:20:07.037194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.024 [2024-12-07 04:20:07.037280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.321 04:20:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:07.321 spdk_app_start Round 1 00:05:07.321 04:20:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:07.321 04:20:09 -- event/event.sh@25 -- # waitforlisten 54931 /var/tmp/spdk-nbd.sock 00:05:07.321 04:20:09 -- common/autotest_common.sh@829 -- # '[' -z 54931 ']' 00:05:07.321 04:20:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.321 04:20:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.321 04:20:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.321 04:20:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.321 04:20:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.321 04:20:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.321 04:20:10 -- common/autotest_common.sh@862 -- # return 0 00:05:07.321 04:20:10 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.321 Malloc0 00:05:07.321 04:20:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.321 Malloc1 00:05:07.321 04:20:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@12 -- # local i 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.321 04:20:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.581 /dev/nbd0 00:05:07.581 04:20:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.581 04:20:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.581 04:20:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:07.581 04:20:10 -- common/autotest_common.sh@867 -- # local i 00:05:07.581 04:20:10 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:05:07.581 04:20:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.581 04:20:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:07.581 04:20:10 -- common/autotest_common.sh@871 -- # break 00:05:07.581 04:20:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.581 04:20:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.581 04:20:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.581 1+0 records in 00:05:07.581 1+0 records out 00:05:07.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030054 s, 13.6 MB/s 00:05:07.581 04:20:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.581 04:20:10 -- common/autotest_common.sh@884 -- # size=4096 00:05:07.581 04:20:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.581 04:20:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.581 04:20:10 -- common/autotest_common.sh@887 -- # return 0 00:05:07.581 04:20:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.581 04:20:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.581 04:20:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.841 /dev/nbd1 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.841 04:20:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:07.841 04:20:11 -- common/autotest_common.sh@867 -- # local i 00:05:07.841 04:20:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.841 04:20:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.841 04:20:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:07.841 04:20:11 -- common/autotest_common.sh@871 -- # break 00:05:07.841 04:20:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.841 04:20:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.841 04:20:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.841 1+0 records in 00:05:07.841 1+0 records out 00:05:07.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312853 s, 13.1 MB/s 00:05:07.841 04:20:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.841 04:20:11 -- common/autotest_common.sh@884 -- # size=4096 00:05:07.841 04:20:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.841 04:20:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.841 04:20:11 -- common/autotest_common.sh@887 -- # return 0 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.841 04:20:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.100 04:20:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.100 { 00:05:08.100 "nbd_device": "/dev/nbd0", 00:05:08.100 "bdev_name": "Malloc0" 00:05:08.100 }, 00:05:08.100 { 
00:05:08.100 "nbd_device": "/dev/nbd1", 00:05:08.100 "bdev_name": "Malloc1" 00:05:08.100 } 00:05:08.100 ]' 00:05:08.100 04:20:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.100 { 00:05:08.100 "nbd_device": "/dev/nbd0", 00:05:08.100 "bdev_name": "Malloc0" 00:05:08.100 }, 00:05:08.100 { 00:05:08.100 "nbd_device": "/dev/nbd1", 00:05:08.100 "bdev_name": "Malloc1" 00:05:08.100 } 00:05:08.100 ]' 00:05:08.100 04:20:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.359 /dev/nbd1' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.359 /dev/nbd1' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.359 256+0 records in 00:05:08.359 256+0 records out 00:05:08.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720516 s, 146 MB/s 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.359 256+0 records in 00:05:08.359 256+0 records out 00:05:08.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237092 s, 44.2 MB/s 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.359 256+0 records in 00:05:08.359 256+0 records out 00:05:08.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230674 s, 45.5 MB/s 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.359 
04:20:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@51 -- # local i 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.359 04:20:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@41 -- # break 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.618 04:20:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@41 -- # break 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.877 04:20:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@65 -- # true 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.136 04:20:12 -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.136 04:20:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.396 04:20:12 -- event/event.sh@35 -- # sleep 3 00:05:09.655 [2024-12-07 04:20:12.749245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.655 [2024-12-07 04:20:12.801058] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:05:09.655 [2024-12-07 04:20:12.801068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.655 [2024-12-07 04:20:12.830748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.655 [2024-12-07 04:20:12.830799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.944 spdk_app_start Round 2 00:05:12.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.944 04:20:15 -- event/event.sh@23 -- # for i in {0..2} 00:05:12.944 04:20:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:12.944 04:20:15 -- event/event.sh@25 -- # waitforlisten 54931 /var/tmp/spdk-nbd.sock 00:05:12.944 04:20:15 -- common/autotest_common.sh@829 -- # '[' -z 54931 ']' 00:05:12.944 04:20:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.944 04:20:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.944 04:20:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.944 04:20:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.944 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.944 04:20:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.944 04:20:15 -- common/autotest_common.sh@862 -- # return 0 00:05:12.944 04:20:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.944 Malloc0 00:05:12.944 04:20:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.203 Malloc1 00:05:13.203 04:20:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.203 04:20:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.462 /dev/nbd0 00:05:13.462 04:20:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.462 04:20:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.462 04:20:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.462 04:20:16 -- common/autotest_common.sh@867 -- # local i 00:05:13.462 04:20:16 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.462 04:20:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.462 04:20:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.462 04:20:16 -- common/autotest_common.sh@871 -- # break 00:05:13.462 04:20:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.462 04:20:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.462 04:20:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.462 1+0 records in 00:05:13.462 1+0 records out 00:05:13.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210254 s, 19.5 MB/s 00:05:13.462 04:20:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.462 04:20:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:13.462 04:20:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.462 04:20:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.462 04:20:16 -- common/autotest_common.sh@887 -- # return 0 00:05:13.462 04:20:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.462 04:20:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.462 04:20:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.722 /dev/nbd1 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.722 04:20:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:13.722 04:20:16 -- common/autotest_common.sh@867 -- # local i 00:05:13.722 04:20:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.722 04:20:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.722 04:20:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:13.722 04:20:16 -- common/autotest_common.sh@871 -- # break 00:05:13.722 04:20:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.722 04:20:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.722 04:20:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.722 1+0 records in 00:05:13.722 1+0 records out 00:05:13.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243424 s, 16.8 MB/s 00:05:13.722 04:20:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.722 04:20:16 -- common/autotest_common.sh@884 -- # size=4096 00:05:13.722 04:20:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.722 04:20:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.722 04:20:16 -- common/autotest_common.sh@887 -- # return 0 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.722 04:20:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.982 { 00:05:13.982 "nbd_device": "/dev/nbd0", 00:05:13.982 "bdev_name": "Malloc0" 
00:05:13.982 }, 00:05:13.982 { 00:05:13.982 "nbd_device": "/dev/nbd1", 00:05:13.982 "bdev_name": "Malloc1" 00:05:13.982 } 00:05:13.982 ]' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.982 { 00:05:13.982 "nbd_device": "/dev/nbd0", 00:05:13.982 "bdev_name": "Malloc0" 00:05:13.982 }, 00:05:13.982 { 00:05:13.982 "nbd_device": "/dev/nbd1", 00:05:13.982 "bdev_name": "Malloc1" 00:05:13.982 } 00:05:13.982 ]' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.982 /dev/nbd1' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.982 /dev/nbd1' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.982 256+0 records in 00:05:13.982 256+0 records out 00:05:13.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473015 s, 222 MB/s 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.982 04:20:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.241 256+0 records in 00:05:14.241 256+0 records out 00:05:14.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210201 s, 49.9 MB/s 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.241 256+0 records in 00:05:14.241 256+0 records out 00:05:14.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243526 s, 43.1 MB/s 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.241 04:20:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.242 04:20:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@41 -- # break 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@41 -- # break 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.501 04:20:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@65 -- # true 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.068 04:20:18 -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.068 04:20:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.326 04:20:18 -- event/event.sh@35 -- # sleep 3 00:05:15.326 [2024-12-07 04:20:18.519331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.585 [2024-12-07 04:20:18.571830] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:15.585 [2024-12-07 04:20:18.571836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.585 [2024-12-07 04:20:18.604635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.585 [2024-12-07 04:20:18.604723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.870 04:20:21 -- event/event.sh@38 -- # waitforlisten 54931 /var/tmp/spdk-nbd.sock 00:05:18.870 04:20:21 -- common/autotest_common.sh@829 -- # '[' -z 54931 ']' 00:05:18.870 04:20:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.870 04:20:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.870 04:20:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.870 04:20:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.870 04:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.870 04:20:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.870 04:20:21 -- common/autotest_common.sh@862 -- # return 0 00:05:18.870 04:20:21 -- event/event.sh@39 -- # killprocess 54931 00:05:18.870 04:20:21 -- common/autotest_common.sh@936 -- # '[' -z 54931 ']' 00:05:18.870 04:20:21 -- common/autotest_common.sh@940 -- # kill -0 54931 00:05:18.870 04:20:21 -- common/autotest_common.sh@941 -- # uname 00:05:18.870 04:20:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.870 04:20:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54931 00:05:18.870 killing process with pid 54931 00:05:18.870 04:20:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.870 04:20:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.870 04:20:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54931' 00:05:18.870 04:20:21 -- common/autotest_common.sh@955 -- # kill 54931 00:05:18.870 04:20:21 -- common/autotest_common.sh@960 -- # wait 54931 00:05:18.870 spdk_app_start is called in Round 0. 00:05:18.870 Shutdown signal received, stop current app iteration 00:05:18.870 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:18.870 spdk_app_start is called in Round 1. 00:05:18.870 Shutdown signal received, stop current app iteration 00:05:18.870 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:18.870 spdk_app_start is called in Round 2. 00:05:18.870 Shutdown signal received, stop current app iteration 00:05:18.870 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:18.870 spdk_app_start is called in Round 3. 
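The app_repeat rounds traced above all drive the same write/verify cycle from nbd_common.sh. Condensed into a standalone sketch, using the same commands, paths and device names that appear in the trace and omitting the helper's error handling, it is roughly:

    # write phase: fill a temp file with 1 MiB of random data and copy it onto each nbd device
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=('/dev/nbd0' '/dev/nbd1')
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB read back from each device must match the temp file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

The nbd devices themselves are exported beforehand through nbd_start_disk RPCs against /var/tmp/spdk-nbd.sock, as the earlier part of the trace shows.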
00:05:18.870 Shutdown signal received, stop current app iteration 00:05:18.870 04:20:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.870 04:20:21 -- event/event.sh@42 -- # return 0 00:05:18.870 00:05:18.870 real 0m17.833s 00:05:18.870 user 0m40.519s 00:05:18.870 sys 0m2.307s 00:05:18.870 04:20:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.870 04:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.870 ************************************ 00:05:18.870 END TEST app_repeat 00:05:18.870 ************************************ 00:05:18.870 04:20:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.870 04:20:21 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.870 04:20:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.870 04:20:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.870 04:20:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.870 ************************************ 00:05:18.870 START TEST cpu_locks 00:05:18.870 ************************************ 00:05:18.870 04:20:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:18.870 * Looking for test storage... 00:05:18.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.870 04:20:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.870 04:20:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.870 04:20:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.870 04:20:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.870 04:20:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.870 04:20:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.870 04:20:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.870 04:20:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.870 04:20:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.870 04:20:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.870 04:20:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.870 04:20:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.870 04:20:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.870 04:20:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.870 04:20:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.870 04:20:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.870 04:20:22 -- scripts/common.sh@344 -- # : 1 00:05:18.870 04:20:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.870 04:20:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.870 04:20:22 -- scripts/common.sh@364 -- # decimal 1 00:05:18.870 04:20:22 -- scripts/common.sh@352 -- # local d=1 00:05:18.870 04:20:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.870 04:20:22 -- scripts/common.sh@354 -- # echo 1 00:05:18.870 04:20:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.870 04:20:22 -- scripts/common.sh@365 -- # decimal 2 00:05:18.870 04:20:22 -- scripts/common.sh@352 -- # local d=2 00:05:18.870 04:20:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.870 04:20:22 -- scripts/common.sh@354 -- # echo 2 00:05:18.870 04:20:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.870 04:20:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.870 04:20:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.870 04:20:22 -- scripts/common.sh@367 -- # return 0 00:05:18.870 04:20:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.870 04:20:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.870 --rc genhtml_branch_coverage=1 00:05:18.870 --rc genhtml_function_coverage=1 00:05:18.870 --rc genhtml_legend=1 00:05:18.870 --rc geninfo_all_blocks=1 00:05:18.870 --rc geninfo_unexecuted_blocks=1 00:05:18.870 00:05:18.871 ' 00:05:18.871 04:20:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.871 --rc genhtml_branch_coverage=1 00:05:18.871 --rc genhtml_function_coverage=1 00:05:18.871 --rc genhtml_legend=1 00:05:18.871 --rc geninfo_all_blocks=1 00:05:18.871 --rc geninfo_unexecuted_blocks=1 00:05:18.871 00:05:18.871 ' 00:05:18.871 04:20:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.871 --rc genhtml_branch_coverage=1 00:05:18.871 --rc genhtml_function_coverage=1 00:05:18.871 --rc genhtml_legend=1 00:05:18.871 --rc geninfo_all_blocks=1 00:05:18.871 --rc geninfo_unexecuted_blocks=1 00:05:18.871 00:05:18.871 ' 00:05:18.871 04:20:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.871 --rc genhtml_branch_coverage=1 00:05:18.871 --rc genhtml_function_coverage=1 00:05:18.871 --rc genhtml_legend=1 00:05:18.871 --rc geninfo_all_blocks=1 00:05:18.871 --rc geninfo_unexecuted_blocks=1 00:05:18.871 00:05:18.871 ' 00:05:18.871 04:20:22 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:18.871 04:20:22 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:18.871 04:20:22 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:18.871 04:20:22 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:18.871 04:20:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.871 04:20:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.871 04:20:22 -- common/autotest_common.sh@10 -- # set +x 00:05:18.871 ************************************ 00:05:18.871 START TEST default_locks 00:05:18.871 ************************************ 00:05:18.871 04:20:22 -- common/autotest_common.sh@1114 -- # default_locks 00:05:18.871 04:20:22 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55357 00:05:18.871 04:20:22 -- event/cpu_locks.sh@47 -- # waitforlisten 55357 00:05:18.871 04:20:22 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:05:18.871 04:20:22 -- common/autotest_common.sh@829 -- # '[' -z 55357 ']' 00:05:18.871 04:20:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.871 04:20:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.871 04:20:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.871 04:20:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.871 04:20:22 -- common/autotest_common.sh@10 -- # set +x 00:05:19.129 [2024-12-07 04:20:22.136724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.129 [2024-12-07 04:20:22.136829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55357 ] 00:05:19.129 [2024-12-07 04:20:22.269280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.129 [2024-12-07 04:20:22.321987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.130 [2024-12-07 04:20:22.322184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.065 04:20:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.065 04:20:23 -- common/autotest_common.sh@862 -- # return 0 00:05:20.065 04:20:23 -- event/cpu_locks.sh@49 -- # locks_exist 55357 00:05:20.065 04:20:23 -- event/cpu_locks.sh@22 -- # lslocks -p 55357 00:05:20.065 04:20:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.324 04:20:23 -- event/cpu_locks.sh@50 -- # killprocess 55357 00:05:20.324 04:20:23 -- common/autotest_common.sh@936 -- # '[' -z 55357 ']' 00:05:20.324 04:20:23 -- common/autotest_common.sh@940 -- # kill -0 55357 00:05:20.324 04:20:23 -- common/autotest_common.sh@941 -- # uname 00:05:20.324 04:20:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.324 04:20:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55357 00:05:20.324 04:20:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.324 04:20:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.324 killing process with pid 55357 00:05:20.324 04:20:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55357' 00:05:20.324 04:20:23 -- common/autotest_common.sh@955 -- # kill 55357 00:05:20.324 04:20:23 -- common/autotest_common.sh@960 -- # wait 55357 00:05:20.584 04:20:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55357 00:05:20.584 04:20:23 -- common/autotest_common.sh@650 -- # local es=0 00:05:20.584 04:20:23 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55357 00:05:20.584 04:20:23 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:20.584 04:20:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.584 04:20:23 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:20.584 04:20:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.584 04:20:23 -- common/autotest_common.sh@653 -- # waitforlisten 55357 00:05:20.584 04:20:23 -- common/autotest_common.sh@829 -- # '[' -z 55357 ']' 00:05:20.584 04:20:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.584 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.584 04:20:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.584 04:20:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.584 04:20:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.584 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.584 ERROR: process (pid: 55357) is no longer running 00:05:20.584 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55357) - No such process 00:05:20.584 04:20:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.584 04:20:23 -- common/autotest_common.sh@862 -- # return 1 00:05:20.584 04:20:23 -- common/autotest_common.sh@653 -- # es=1 00:05:20.584 04:20:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.584 04:20:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.584 04:20:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.584 04:20:23 -- event/cpu_locks.sh@54 -- # no_locks 00:05:20.584 04:20:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.584 04:20:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.584 04:20:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.584 00:05:20.584 real 0m1.691s 00:05:20.584 user 0m1.889s 00:05:20.584 sys 0m0.444s 00:05:20.584 04:20:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.584 ************************************ 00:05:20.584 END TEST default_locks 00:05:20.584 ************************************ 00:05:20.584 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.584 04:20:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:20.584 04:20:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.584 04:20:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.584 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.584 ************************************ 00:05:20.584 START TEST default_locks_via_rpc 00:05:20.584 ************************************ 00:05:20.585 04:20:23 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:20.585 04:20:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55409 00:05:20.585 04:20:23 -- event/cpu_locks.sh@63 -- # waitforlisten 55409 00:05:20.585 04:20:23 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.585 04:20:23 -- common/autotest_common.sh@829 -- # '[' -z 55409 ']' 00:05:20.585 04:20:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.585 04:20:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.585 04:20:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.585 04:20:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.585 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:05:20.844 [2024-12-07 04:20:23.873523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
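The locks_exist check that ran against pid 55357 above is only a probe of the kernel's file-lock table: lslocks lists the locks held by the target and grep looks for the spdk_cpu_lock entry. A minimal equivalent, with the pid hard-coded purely for illustration, is:

    pid=55357
    # succeeds only while the target still holds its spdk_cpu_lock file lock
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by pid $pid"
    fi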
00:05:20.844 [2024-12-07 04:20:23.873623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55409 ] 00:05:20.844 [2024-12-07 04:20:24.011261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.844 [2024-12-07 04:20:24.061614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.844 [2024-12-07 04:20:24.061808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.782 04:20:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.782 04:20:24 -- common/autotest_common.sh@862 -- # return 0 00:05:21.782 04:20:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:21.782 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.782 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.782 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.782 04:20:24 -- event/cpu_locks.sh@67 -- # no_locks 00:05:21.782 04:20:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.782 04:20:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.782 04:20:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.782 04:20:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:21.782 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.782 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.782 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.782 04:20:24 -- event/cpu_locks.sh@71 -- # locks_exist 55409 00:05:21.782 04:20:24 -- event/cpu_locks.sh@22 -- # lslocks -p 55409 00:05:21.782 04:20:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.040 04:20:25 -- event/cpu_locks.sh@73 -- # killprocess 55409 00:05:22.040 04:20:25 -- common/autotest_common.sh@936 -- # '[' -z 55409 ']' 00:05:22.040 04:20:25 -- common/autotest_common.sh@940 -- # kill -0 55409 00:05:22.040 04:20:25 -- common/autotest_common.sh@941 -- # uname 00:05:22.040 04:20:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.040 04:20:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55409 00:05:22.040 04:20:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.040 04:20:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.040 killing process with pid 55409 00:05:22.040 04:20:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55409' 00:05:22.040 04:20:25 -- common/autotest_common.sh@955 -- # kill 55409 00:05:22.040 04:20:25 -- common/autotest_common.sh@960 -- # wait 55409 00:05:22.300 00:05:22.300 real 0m1.618s 00:05:22.300 user 0m1.873s 00:05:22.300 sys 0m0.371s 00:05:22.300 04:20:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.300 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.300 ************************************ 00:05:22.300 END TEST default_locks_via_rpc 00:05:22.300 ************************************ 00:05:22.300 04:20:25 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.300 04:20:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.300 04:20:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.300 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.300 
************************************ 00:05:22.300 START TEST non_locking_app_on_locked_coremask 00:05:22.300 ************************************ 00:05:22.300 04:20:25 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:22.300 04:20:25 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55455 00:05:22.300 04:20:25 -- event/cpu_locks.sh@81 -- # waitforlisten 55455 /var/tmp/spdk.sock 00:05:22.300 04:20:25 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.300 04:20:25 -- common/autotest_common.sh@829 -- # '[' -z 55455 ']' 00:05:22.300 04:20:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.300 04:20:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.300 04:20:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.300 04:20:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.300 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.558 [2024-12-07 04:20:25.545810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.558 [2024-12-07 04:20:25.545914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55455 ] 00:05:22.558 [2024-12-07 04:20:25.677015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.558 [2024-12-07 04:20:25.727588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.558 [2024-12-07 04:20:25.727760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.494 04:20:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.494 04:20:26 -- common/autotest_common.sh@862 -- # return 0 00:05:23.494 04:20:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55471 00:05:23.494 04:20:26 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.494 04:20:26 -- event/cpu_locks.sh@85 -- # waitforlisten 55471 /var/tmp/spdk2.sock 00:05:23.494 04:20:26 -- common/autotest_common.sh@829 -- # '[' -z 55471 ']' 00:05:23.494 04:20:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.494 04:20:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.494 04:20:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.494 04:20:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.494 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:05:23.494 [2024-12-07 04:20:26.565408] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.494 [2024-12-07 04:20:26.565538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55471 ] 00:05:23.494 [2024-12-07 04:20:26.721234] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
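Where the plain default_locks test relied on the lock simply existing, default_locks_via_rpc above toggles the locks on a live target. rpc_cmd in the trace is a thin wrapper, so assuming it resolves to scripts/rpc.py against the default /var/tmp/spdk.sock socket, the same sequence can be issued by hand roughly as:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks   # release the per-core lock files at runtime
    $rpc framework_enable_cpumask_locks    # re-acquire them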
00:05:23.494 [2024-12-07 04:20:26.721282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.753 [2024-12-07 04:20:26.825128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.753 [2024-12-07 04:20:26.825281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.321 04:20:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.321 04:20:27 -- common/autotest_common.sh@862 -- # return 0 00:05:24.321 04:20:27 -- event/cpu_locks.sh@87 -- # locks_exist 55455 00:05:24.321 04:20:27 -- event/cpu_locks.sh@22 -- # lslocks -p 55455 00:05:24.321 04:20:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.890 04:20:28 -- event/cpu_locks.sh@89 -- # killprocess 55455 00:05:24.890 04:20:28 -- common/autotest_common.sh@936 -- # '[' -z 55455 ']' 00:05:24.890 04:20:28 -- common/autotest_common.sh@940 -- # kill -0 55455 00:05:24.890 04:20:28 -- common/autotest_common.sh@941 -- # uname 00:05:24.890 04:20:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.890 04:20:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55455 00:05:24.890 04:20:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.890 04:20:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.890 killing process with pid 55455 00:05:24.890 04:20:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55455' 00:05:24.890 04:20:28 -- common/autotest_common.sh@955 -- # kill 55455 00:05:24.890 04:20:28 -- common/autotest_common.sh@960 -- # wait 55455 00:05:25.458 04:20:28 -- event/cpu_locks.sh@90 -- # killprocess 55471 00:05:25.458 04:20:28 -- common/autotest_common.sh@936 -- # '[' -z 55471 ']' 00:05:25.458 04:20:28 -- common/autotest_common.sh@940 -- # kill -0 55471 00:05:25.458 04:20:28 -- common/autotest_common.sh@941 -- # uname 00:05:25.458 04:20:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.458 04:20:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55471 00:05:25.458 04:20:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.458 04:20:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.458 killing process with pid 55471 00:05:25.459 04:20:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55471' 00:05:25.459 04:20:28 -- common/autotest_common.sh@955 -- # kill 55471 00:05:25.459 04:20:28 -- common/autotest_common.sh@960 -- # wait 55471 00:05:26.028 00:05:26.028 real 0m3.476s 00:05:26.028 user 0m4.094s 00:05:26.028 sys 0m0.810s 00:05:26.028 04:20:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.028 04:20:28 -- common/autotest_common.sh@10 -- # set +x 00:05:26.028 ************************************ 00:05:26.028 END TEST non_locking_app_on_locked_coremask 00:05:26.028 ************************************ 00:05:26.028 04:20:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:26.028 04:20:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.028 04:20:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.028 04:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:26.028 ************************************ 00:05:26.028 START TEST locking_app_on_unlocked_coremask 00:05:26.028 ************************************ 00:05:26.028 04:20:29 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:26.028 04:20:29 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55531 00:05:26.028 04:20:29 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:26.028 04:20:29 -- event/cpu_locks.sh@99 -- # waitforlisten 55531 /var/tmp/spdk.sock 00:05:26.028 04:20:29 -- common/autotest_common.sh@829 -- # '[' -z 55531 ']' 00:05:26.028 04:20:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.028 04:20:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.028 04:20:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.028 04:20:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.028 04:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:26.028 [2024-12-07 04:20:29.065637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.028 [2024-12-07 04:20:29.065744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55531 ] 00:05:26.028 [2024-12-07 04:20:29.195440] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:26.028 [2024-12-07 04:20:29.195503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.028 [2024-12-07 04:20:29.247226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.028 [2024-12-07 04:20:29.247401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.966 04:20:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.966 04:20:29 -- common/autotest_common.sh@862 -- # return 0 00:05:26.966 04:20:29 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55543 00:05:26.966 04:20:29 -- event/cpu_locks.sh@103 -- # waitforlisten 55543 /var/tmp/spdk2.sock 00:05:26.966 04:20:29 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.966 04:20:29 -- common/autotest_common.sh@829 -- # '[' -z 55543 ']' 00:05:26.966 04:20:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.966 04:20:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.966 04:20:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.966 04:20:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.966 04:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:26.966 [2024-12-07 04:20:30.058731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:26.966 [2024-12-07 04:20:30.058824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55543 ] 00:05:26.966 [2024-12-07 04:20:30.198876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.225 [2024-12-07 04:20:30.301849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.225 [2024-12-07 04:20:30.302022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.794 04:20:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.794 04:20:30 -- common/autotest_common.sh@862 -- # return 0 00:05:27.794 04:20:30 -- event/cpu_locks.sh@105 -- # locks_exist 55543 00:05:27.794 04:20:30 -- event/cpu_locks.sh@22 -- # lslocks -p 55543 00:05:27.794 04:20:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.731 04:20:31 -- event/cpu_locks.sh@107 -- # killprocess 55531 00:05:28.731 04:20:31 -- common/autotest_common.sh@936 -- # '[' -z 55531 ']' 00:05:28.731 04:20:31 -- common/autotest_common.sh@940 -- # kill -0 55531 00:05:28.731 04:20:31 -- common/autotest_common.sh@941 -- # uname 00:05:28.731 04:20:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.731 04:20:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55531 00:05:28.731 04:20:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.731 killing process with pid 55531 00:05:28.731 04:20:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.731 04:20:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55531' 00:05:28.731 04:20:31 -- common/autotest_common.sh@955 -- # kill 55531 00:05:28.731 04:20:31 -- common/autotest_common.sh@960 -- # wait 55531 00:05:29.300 04:20:32 -- event/cpu_locks.sh@108 -- # killprocess 55543 00:05:29.300 04:20:32 -- common/autotest_common.sh@936 -- # '[' -z 55543 ']' 00:05:29.300 04:20:32 -- common/autotest_common.sh@940 -- # kill -0 55543 00:05:29.300 04:20:32 -- common/autotest_common.sh@941 -- # uname 00:05:29.300 04:20:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.300 04:20:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55543 00:05:29.300 04:20:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.300 killing process with pid 55543 00:05:29.300 04:20:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.300 04:20:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55543' 00:05:29.300 04:20:32 -- common/autotest_common.sh@955 -- # kill 55543 00:05:29.300 04:20:32 -- common/autotest_common.sh@960 -- # wait 55543 00:05:29.560 00:05:29.560 real 0m3.651s 00:05:29.560 user 0m4.242s 00:05:29.560 sys 0m0.895s 00:05:29.560 ************************************ 00:05:29.560 END TEST locking_app_on_unlocked_coremask 00:05:29.560 04:20:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.560 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:29.560 ************************************ 00:05:29.560 04:20:32 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.560 04:20:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.560 04:20:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.560 04:20:32 -- common/autotest_common.sh@10 -- # set +x 
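
The locks_exist and killprocess steps that keep repeating in these tests reduce to standard tooling: lslocks lists the file locks held by a pid (the harness greps for the spdk_cpu_lock files under /var/tmp), and kill -0 plus ps confirm the pid is still the expected reactor process before it is killed. A rough re-creation of those two helpers (bodies simplified from what the trace shows, not copied from the harness):

  locks_exist() {
      # true if pid $1 holds at least one spdk_cpu_lock file
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  killprocess() {
      # kill pid $1 only if it is still running, then reap it
      kill -0 "$1" 2>/dev/null || return 0
      echo "killing process with pid $1 ($(ps --no-headers -o comm= "$1"))"
      kill "$1"
      wait "$1" 2>/dev/null
  }
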
00:05:29.560 ************************************ 00:05:29.560 START TEST locking_app_on_locked_coremask 00:05:29.560 ************************************ 00:05:29.560 04:20:32 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:29.560 04:20:32 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55610 00:05:29.560 04:20:32 -- event/cpu_locks.sh@116 -- # waitforlisten 55610 /var/tmp/spdk.sock 00:05:29.560 04:20:32 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.560 04:20:32 -- common/autotest_common.sh@829 -- # '[' -z 55610 ']' 00:05:29.560 04:20:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.560 04:20:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.560 04:20:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.560 04:20:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.560 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:29.560 [2024-12-07 04:20:32.776426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.560 [2024-12-07 04:20:32.776538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55610 ] 00:05:29.820 [2024-12-07 04:20:32.910830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.820 [2024-12-07 04:20:32.961850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.820 [2024-12-07 04:20:32.962029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.759 04:20:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.759 04:20:33 -- common/autotest_common.sh@862 -- # return 0 00:05:30.759 04:20:33 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55626 00:05:30.759 04:20:33 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55626 /var/tmp/spdk2.sock 00:05:30.759 04:20:33 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.759 04:20:33 -- common/autotest_common.sh@650 -- # local es=0 00:05:30.759 04:20:33 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55626 /var/tmp/spdk2.sock 00:05:30.759 04:20:33 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.759 04:20:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.759 04:20:33 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.759 04:20:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.759 04:20:33 -- common/autotest_common.sh@653 -- # waitforlisten 55626 /var/tmp/spdk2.sock 00:05:30.759 04:20:33 -- common/autotest_common.sh@829 -- # '[' -z 55626 ']' 00:05:30.759 04:20:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.759 04:20:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.759 04:20:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:30.759 04:20:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.759 04:20:33 -- common/autotest_common.sh@10 -- # set +x 00:05:30.759 [2024-12-07 04:20:33.804525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.759 [2024-12-07 04:20:33.804631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55626 ] 00:05:30.759 [2024-12-07 04:20:33.945035] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55610 has claimed it. 00:05:30.759 [2024-12-07 04:20:33.945121] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.327 ERROR: process (pid: 55626) is no longer running 00:05:31.327 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55626) - No such process 00:05:31.327 04:20:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.327 04:20:34 -- common/autotest_common.sh@862 -- # return 1 00:05:31.327 04:20:34 -- common/autotest_common.sh@653 -- # es=1 00:05:31.327 04:20:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.327 04:20:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.327 04:20:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.327 04:20:34 -- event/cpu_locks.sh@122 -- # locks_exist 55610 00:05:31.327 04:20:34 -- event/cpu_locks.sh@22 -- # lslocks -p 55610 00:05:31.327 04:20:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.586 04:20:34 -- event/cpu_locks.sh@124 -- # killprocess 55610 00:05:31.586 04:20:34 -- common/autotest_common.sh@936 -- # '[' -z 55610 ']' 00:05:31.586 04:20:34 -- common/autotest_common.sh@940 -- # kill -0 55610 00:05:31.586 04:20:34 -- common/autotest_common.sh@941 -- # uname 00:05:31.586 04:20:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.586 04:20:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55610 00:05:31.845 killing process with pid 55610 00:05:31.845 04:20:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.845 04:20:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.845 04:20:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55610' 00:05:31.845 04:20:34 -- common/autotest_common.sh@955 -- # kill 55610 00:05:31.845 04:20:34 -- common/autotest_common.sh@960 -- # wait 55610 00:05:32.103 ************************************ 00:05:32.103 END TEST locking_app_on_locked_coremask 00:05:32.103 ************************************ 00:05:32.103 00:05:32.103 real 0m2.392s 00:05:32.103 user 0m2.910s 00:05:32.103 sys 0m0.460s 00:05:32.103 04:20:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.103 04:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.103 04:20:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:32.103 04:20:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.103 04:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.103 04:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.103 ************************************ 00:05:32.103 START TEST locking_overlapped_coremask 00:05:32.103 ************************************ 00:05:32.103 04:20:35 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:32.103 04:20:35 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55666 00:05:32.103 04:20:35 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:32.103 04:20:35 -- event/cpu_locks.sh@133 -- # waitforlisten 55666 /var/tmp/spdk.sock 00:05:32.103 04:20:35 -- common/autotest_common.sh@829 -- # '[' -z 55666 ']' 00:05:32.103 04:20:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.103 04:20:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.103 04:20:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.103 04:20:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.103 04:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.103 [2024-12-07 04:20:35.214065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.103 [2024-12-07 04:20:35.214156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55666 ] 00:05:32.361 [2024-12-07 04:20:35.346484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.361 [2024-12-07 04:20:35.403558] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.361 [2024-12-07 04:20:35.404023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.361 [2024-12-07 04:20:35.404090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.361 [2024-12-07 04:20:35.404237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.298 04:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.298 04:20:36 -- common/autotest_common.sh@862 -- # return 0 00:05:33.298 04:20:36 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:33.298 04:20:36 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55684 00:05:33.298 04:20:36 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55684 /var/tmp/spdk2.sock 00:05:33.298 04:20:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:33.298 04:20:36 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55684 /var/tmp/spdk2.sock 00:05:33.298 04:20:36 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:33.298 04:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.298 04:20:36 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:33.298 04:20:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.298 04:20:36 -- common/autotest_common.sh@653 -- # waitforlisten 55684 /var/tmp/spdk2.sock 00:05:33.298 04:20:36 -- common/autotest_common.sh@829 -- # '[' -z 55684 ']' 00:05:33.298 04:20:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.298 04:20:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.298 04:20:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
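
The overlap being exercised here sits in the core masks themselves: the first target runs with -m 0x7 (cores 0-2) and locks all three, the second asks for -m 0x1c (cores 2-4), so both want core 2 and the second instance is expected to refuse to start — which is what the "Cannot create lock on core 2" error just below reports. A minimal sketch of that expected-failure check (same paths as above; the harness wraps this in its NOT/waitforlisten helpers):

  ./build/bin/spdk_tgt -m 0x7 &          # locks cores 0, 1 and 2
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # 0x1c selects cores 2-4; core 2 is already locked, so this must exit non-zero
  if ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock; then
      echo "unexpected: overlapping target started" >&2
      exit 1
  fi
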
00:05:33.298 04:20:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.298 04:20:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.298 [2024-12-07 04:20:36.278915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.298 [2024-12-07 04:20:36.279178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55684 ] 00:05:33.298 [2024-12-07 04:20:36.421378] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55666 has claimed it. 00:05:33.298 [2024-12-07 04:20:36.421461] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.867 ERROR: process (pid: 55684) is no longer running 00:05:33.867 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55684) - No such process 00:05:33.867 04:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.867 04:20:36 -- common/autotest_common.sh@862 -- # return 1 00:05:33.867 04:20:36 -- common/autotest_common.sh@653 -- # es=1 00:05:33.867 04:20:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.867 04:20:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.867 04:20:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.867 04:20:36 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:33.867 04:20:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.867 04:20:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.867 04:20:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.867 04:20:37 -- event/cpu_locks.sh@141 -- # killprocess 55666 00:05:33.867 04:20:37 -- common/autotest_common.sh@936 -- # '[' -z 55666 ']' 00:05:33.867 04:20:37 -- common/autotest_common.sh@940 -- # kill -0 55666 00:05:33.867 04:20:37 -- common/autotest_common.sh@941 -- # uname 00:05:33.867 04:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.867 04:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55666 00:05:33.867 04:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.867 04:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.867 04:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55666' 00:05:33.867 killing process with pid 55666 00:05:33.867 04:20:37 -- common/autotest_common.sh@955 -- # kill 55666 00:05:33.867 04:20:37 -- common/autotest_common.sh@960 -- # wait 55666 00:05:34.142 00:05:34.142 real 0m2.151s 00:05:34.142 user 0m6.252s 00:05:34.142 sys 0m0.291s 00:05:34.142 04:20:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.142 04:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.142 ************************************ 00:05:34.142 END TEST locking_overlapped_coremask 00:05:34.142 ************************************ 00:05:34.142 04:20:37 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:34.142 04:20:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.142 04:20:37 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.142 04:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.142 ************************************ 00:05:34.142 START TEST locking_overlapped_coremask_via_rpc 00:05:34.142 ************************************ 00:05:34.142 04:20:37 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:34.142 04:20:37 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55728 00:05:34.142 04:20:37 -- event/cpu_locks.sh@149 -- # waitforlisten 55728 /var/tmp/spdk.sock 00:05:34.142 04:20:37 -- common/autotest_common.sh@829 -- # '[' -z 55728 ']' 00:05:34.142 04:20:37 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:34.142 04:20:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.142 04:20:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.142 04:20:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.142 04:20:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.142 04:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:34.417 [2024-12-07 04:20:37.426434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.417 [2024-12-07 04:20:37.426532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55728 ] 00:05:34.417 [2024-12-07 04:20:37.566424] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.417 [2024-12-07 04:20:37.566470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.417 [2024-12-07 04:20:37.637748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.417 [2024-12-07 04:20:37.638288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.417 [2024-12-07 04:20:37.638432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.417 [2024-12-07 04:20:37.638441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.354 04:20:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.354 04:20:38 -- common/autotest_common.sh@862 -- # return 0 00:05:35.354 04:20:38 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:35.354 04:20:38 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55746 00:05:35.354 04:20:38 -- event/cpu_locks.sh@153 -- # waitforlisten 55746 /var/tmp/spdk2.sock 00:05:35.354 04:20:38 -- common/autotest_common.sh@829 -- # '[' -z 55746 ']' 00:05:35.354 04:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.354 04:20:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.354 04:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:35.354 04:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.354 04:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.354 [2024-12-07 04:20:38.469557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.354 [2024-12-07 04:20:38.469926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55746 ] 00:05:35.611 [2024-12-07 04:20:38.611219] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:35.611 [2024-12-07 04:20:38.611270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.611 [2024-12-07 04:20:38.715695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.611 [2024-12-07 04:20:38.716154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.611 [2024-12-07 04:20:38.719762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:35.611 [2024-12-07 04:20:38.719763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.176 04:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.176 04:20:39 -- common/autotest_common.sh@862 -- # return 0 00:05:36.176 04:20:39 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.176 04:20:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.176 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.176 04:20:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.176 04:20:39 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.176 04:20:39 -- common/autotest_common.sh@650 -- # local es=0 00:05:36.176 04:20:39 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.176 04:20:39 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:36.176 04:20:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.176 04:20:39 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:36.176 04:20:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.176 04:20:39 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.176 04:20:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.176 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.176 [2024-12-07 04:20:39.408754] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55728 has claimed it. 
00:05:36.176 request: 00:05:36.176 { 00:05:36.176 "method": "framework_enable_cpumask_locks", 00:05:36.176 "req_id": 1 00:05:36.176 } 00:05:36.176 Got JSON-RPC error response 00:05:36.176 response: 00:05:36.176 { 00:05:36.176 "code": -32603, 00:05:36.176 "message": "Failed to claim CPU core: 2" 00:05:36.176 } 00:05:36.176 04:20:39 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:36.176 04:20:39 -- common/autotest_common.sh@653 -- # es=1 00:05:36.176 04:20:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:36.176 04:20:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:36.176 04:20:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:36.176 04:20:39 -- event/cpu_locks.sh@158 -- # waitforlisten 55728 /var/tmp/spdk.sock 00:05:36.176 04:20:39 -- common/autotest_common.sh@829 -- # '[' -z 55728 ']' 00:05:36.176 04:20:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.176 04:20:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.176 04:20:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.176 04:20:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.176 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 04:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.741 04:20:39 -- common/autotest_common.sh@862 -- # return 0 00:05:36.741 04:20:39 -- event/cpu_locks.sh@159 -- # waitforlisten 55746 /var/tmp/spdk2.sock 00:05:36.741 04:20:39 -- common/autotest_common.sh@829 -- # '[' -z 55746 ']' 00:05:36.741 04:20:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.741 04:20:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.741 04:20:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
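
The JSON-RPC exchange above is the runtime variant of the same conflict: both targets start with --disable-cpumask-locks, the first then claims its cores through the framework_enable_cpumask_locks RPC, and the same call against the second target's socket fails with error -32603 ("Failed to claim CPU core: 2") because core 2 is already locked. Roughly, with scripts/rpc.py from the SPDK repo root (socket paths as in the trace):

  # first target, default socket: takes the per-core lock files at runtime
  scripts/rpc.py framework_enable_cpumask_locks

  # second target overlaps on core 2, so this call is expected to fail
  if scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "unexpected: second target claimed overlapping cores" >&2
  fi
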
00:05:36.741 04:20:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.741 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.741 ************************************ 00:05:36.741 END TEST locking_overlapped_coremask_via_rpc 00:05:36.741 ************************************ 00:05:36.741 04:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.741 04:20:39 -- common/autotest_common.sh@862 -- # return 0 00:05:36.741 04:20:39 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.741 04:20:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.741 04:20:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.741 04:20:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.741 00:05:36.741 real 0m2.609s 00:05:36.741 user 0m1.382s 00:05:36.741 sys 0m0.144s 00:05:36.741 04:20:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.741 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.999 04:20:40 -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.999 04:20:40 -- event/cpu_locks.sh@15 -- # [[ -z 55728 ]] 00:05:36.999 04:20:40 -- event/cpu_locks.sh@15 -- # killprocess 55728 00:05:36.999 04:20:40 -- common/autotest_common.sh@936 -- # '[' -z 55728 ']' 00:05:36.999 04:20:40 -- common/autotest_common.sh@940 -- # kill -0 55728 00:05:36.999 04:20:40 -- common/autotest_common.sh@941 -- # uname 00:05:36.999 04:20:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.999 04:20:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55728 00:05:36.999 killing process with pid 55728 00:05:36.999 04:20:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.999 04:20:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.999 04:20:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55728' 00:05:36.999 04:20:40 -- common/autotest_common.sh@955 -- # kill 55728 00:05:36.999 04:20:40 -- common/autotest_common.sh@960 -- # wait 55728 00:05:37.257 04:20:40 -- event/cpu_locks.sh@16 -- # [[ -z 55746 ]] 00:05:37.257 04:20:40 -- event/cpu_locks.sh@16 -- # killprocess 55746 00:05:37.257 04:20:40 -- common/autotest_common.sh@936 -- # '[' -z 55746 ']' 00:05:37.257 04:20:40 -- common/autotest_common.sh@940 -- # kill -0 55746 00:05:37.257 04:20:40 -- common/autotest_common.sh@941 -- # uname 00:05:37.257 04:20:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.257 04:20:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55746 00:05:37.257 killing process with pid 55746 00:05:37.257 04:20:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:37.257 04:20:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:37.257 04:20:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55746' 00:05:37.257 04:20:40 -- common/autotest_common.sh@955 -- # kill 55746 00:05:37.257 04:20:40 -- common/autotest_common.sh@960 -- # wait 55746 00:05:37.515 04:20:40 -- event/cpu_locks.sh@18 -- # rm -f 00:05:37.515 04:20:40 -- event/cpu_locks.sh@1 -- # cleanup 00:05:37.515 04:20:40 -- event/cpu_locks.sh@15 -- # [[ -z 55728 ]] 00:05:37.515 04:20:40 -- event/cpu_locks.sh@15 -- # killprocess 55728 00:05:37.515 04:20:40 -- 
common/autotest_common.sh@936 -- # '[' -z 55728 ']' 00:05:37.515 Process with pid 55728 is not found 00:05:37.515 04:20:40 -- common/autotest_common.sh@940 -- # kill -0 55728 00:05:37.515 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55728) - No such process 00:05:37.515 04:20:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55728 is not found' 00:05:37.515 04:20:40 -- event/cpu_locks.sh@16 -- # [[ -z 55746 ]] 00:05:37.515 04:20:40 -- event/cpu_locks.sh@16 -- # killprocess 55746 00:05:37.515 04:20:40 -- common/autotest_common.sh@936 -- # '[' -z 55746 ']' 00:05:37.515 Process with pid 55746 is not found 00:05:37.515 04:20:40 -- common/autotest_common.sh@940 -- # kill -0 55746 00:05:37.515 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55746) - No such process 00:05:37.515 04:20:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55746 is not found' 00:05:37.515 04:20:40 -- event/cpu_locks.sh@18 -- # rm -f 00:05:37.515 ************************************ 00:05:37.515 END TEST cpu_locks 00:05:37.515 ************************************ 00:05:37.515 00:05:37.515 real 0m18.730s 00:05:37.515 user 0m34.543s 00:05:37.515 sys 0m4.098s 00:05:37.515 04:20:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.515 04:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.515 00:05:37.515 real 0m43.889s 00:05:37.515 user 1m25.361s 00:05:37.515 sys 0m7.079s 00:05:37.515 04:20:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.515 04:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.515 ************************************ 00:05:37.515 END TEST event 00:05:37.515 ************************************ 00:05:37.515 04:20:40 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:37.515 04:20:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.515 04:20:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.515 04:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.515 ************************************ 00:05:37.515 START TEST thread 00:05:37.515 ************************************ 00:05:37.515 04:20:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:37.773 * Looking for test storage... 
00:05:37.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:37.773 04:20:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.773 04:20:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.773 04:20:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.773 04:20:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.773 04:20:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.773 04:20:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.773 04:20:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.773 04:20:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.773 04:20:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.773 04:20:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.773 04:20:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.773 04:20:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.773 04:20:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.773 04:20:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.773 04:20:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.773 04:20:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.773 04:20:40 -- scripts/common.sh@344 -- # : 1 00:05:37.773 04:20:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.773 04:20:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.773 04:20:40 -- scripts/common.sh@364 -- # decimal 1 00:05:37.773 04:20:40 -- scripts/common.sh@352 -- # local d=1 00:05:37.773 04:20:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.773 04:20:40 -- scripts/common.sh@354 -- # echo 1 00:05:37.773 04:20:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.773 04:20:40 -- scripts/common.sh@365 -- # decimal 2 00:05:37.773 04:20:40 -- scripts/common.sh@352 -- # local d=2 00:05:37.773 04:20:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.773 04:20:40 -- scripts/common.sh@354 -- # echo 2 00:05:37.773 04:20:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.773 04:20:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.773 04:20:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.773 04:20:40 -- scripts/common.sh@367 -- # return 0 00:05:37.773 04:20:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.773 04:20:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.773 --rc genhtml_branch_coverage=1 00:05:37.773 --rc genhtml_function_coverage=1 00:05:37.773 --rc genhtml_legend=1 00:05:37.773 --rc geninfo_all_blocks=1 00:05:37.773 --rc geninfo_unexecuted_blocks=1 00:05:37.773 00:05:37.773 ' 00:05:37.773 04:20:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.773 --rc genhtml_branch_coverage=1 00:05:37.773 --rc genhtml_function_coverage=1 00:05:37.773 --rc genhtml_legend=1 00:05:37.773 --rc geninfo_all_blocks=1 00:05:37.773 --rc geninfo_unexecuted_blocks=1 00:05:37.773 00:05:37.773 ' 00:05:37.773 04:20:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.773 --rc genhtml_branch_coverage=1 00:05:37.773 --rc genhtml_function_coverage=1 00:05:37.773 --rc genhtml_legend=1 00:05:37.773 --rc geninfo_all_blocks=1 00:05:37.773 --rc geninfo_unexecuted_blocks=1 00:05:37.773 00:05:37.773 ' 00:05:37.773 04:20:40 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.773 --rc genhtml_branch_coverage=1 00:05:37.773 --rc genhtml_function_coverage=1 00:05:37.773 --rc genhtml_legend=1 00:05:37.773 --rc geninfo_all_blocks=1 00:05:37.773 --rc geninfo_unexecuted_blocks=1 00:05:37.773 00:05:37.773 ' 00:05:37.773 04:20:40 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:37.773 04:20:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:37.773 04:20:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.773 04:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.773 ************************************ 00:05:37.773 START TEST thread_poller_perf 00:05:37.773 ************************************ 00:05:37.773 04:20:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:37.773 [2024-12-07 04:20:40.913952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.773 [2024-12-07 04:20:40.914219] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55877 ] 00:05:38.030 [2024-12-07 04:20:41.051721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.030 [2024-12-07 04:20:41.098831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.030 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:39.406 [2024-12-07T04:20:42.646Z] ====================================== 00:05:39.406 [2024-12-07T04:20:42.646Z] busy:2210974744 (cyc) 00:05:39.406 [2024-12-07T04:20:42.646Z] total_run_count: 350000 00:05:39.406 [2024-12-07T04:20:42.646Z] tsc_hz: 2200000000 (cyc) 00:05:39.406 [2024-12-07T04:20:42.646Z] ====================================== 00:05:39.406 [2024-12-07T04:20:42.646Z] poller_cost: 6317 (cyc), 2871 (nsec) 00:05:39.406 00:05:39.406 real 0m1.315s 00:05:39.406 user 0m1.158s 00:05:39.406 sys 0m0.048s 00:05:39.406 04:20:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.406 ************************************ 00:05:39.406 END TEST thread_poller_perf 00:05:39.406 ************************************ 00:05:39.406 04:20:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.406 04:20:42 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.406 04:20:42 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:39.406 04:20:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.406 04:20:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.406 ************************************ 00:05:39.406 START TEST thread_poller_perf 00:05:39.406 ************************************ 00:05:39.406 04:20:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.406 [2024-12-07 04:20:42.280867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
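
The poller_perf summaries (one just above, another after the run starting here) report busy TSC cycles, completed poller runs and the TSC rate; -b, -l and -t select the number of pollers, the poll period in microseconds and the run time in seconds, and poller_cost is simply busy cycles divided by run count, converted to nanoseconds via the TSC frequency. Re-deriving the first summary's figures in shell (numbers copied from that run):

  awk 'BEGIN {
      busy = 2210974744; runs = 350000; tsc_hz = 2200000000
      cyc  = busy / runs              # ~6317 cycles per poller invocation
      nsec = cyc * 1e9 / tsc_hz       # ~2871 ns at 2.2 GHz
      printf "poller_cost: %.0f (cyc), %.0f (nsec)\n", cyc, nsec
  }'
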
00:05:39.406 [2024-12-07 04:20:42.280972] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55907 ] 00:05:39.406 [2024-12-07 04:20:42.416102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.406 [2024-12-07 04:20:42.467430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.406 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:40.342 [2024-12-07T04:20:43.582Z] ====================================== 00:05:40.342 [2024-12-07T04:20:43.582Z] busy:2202565576 (cyc) 00:05:40.342 [2024-12-07T04:20:43.582Z] total_run_count: 4818000 00:05:40.342 [2024-12-07T04:20:43.582Z] tsc_hz: 2200000000 (cyc) 00:05:40.342 [2024-12-07T04:20:43.582Z] ====================================== 00:05:40.342 [2024-12-07T04:20:43.582Z] poller_cost: 457 (cyc), 207 (nsec) 00:05:40.342 00:05:40.342 real 0m1.295s 00:05:40.342 user 0m1.148s 00:05:40.342 sys 0m0.041s 00:05:40.342 04:20:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.342 ************************************ 00:05:40.342 END TEST thread_poller_perf 00:05:40.342 ************************************ 00:05:40.342 04:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.601 04:20:43 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:40.602 ************************************ 00:05:40.602 END TEST thread 00:05:40.602 ************************************ 00:05:40.602 00:05:40.602 real 0m2.883s 00:05:40.602 user 0m2.440s 00:05:40.602 sys 0m0.220s 00:05:40.602 04:20:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.602 04:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.602 04:20:43 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:40.602 04:20:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.602 04:20:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.602 04:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.602 ************************************ 00:05:40.602 START TEST accel 00:05:40.602 ************************************ 00:05:40.602 04:20:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:40.602 * Looking for test storage... 
00:05:40.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:40.602 04:20:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.602 04:20:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.602 04:20:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.602 04:20:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.602 04:20:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.602 04:20:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.602 04:20:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.602 04:20:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.602 04:20:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.602 04:20:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.602 04:20:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.602 04:20:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.602 04:20:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.602 04:20:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.602 04:20:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.602 04:20:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.602 04:20:43 -- scripts/common.sh@344 -- # : 1 00:05:40.602 04:20:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.602 04:20:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.602 04:20:43 -- scripts/common.sh@364 -- # decimal 1 00:05:40.602 04:20:43 -- scripts/common.sh@352 -- # local d=1 00:05:40.602 04:20:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.602 04:20:43 -- scripts/common.sh@354 -- # echo 1 00:05:40.602 04:20:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.602 04:20:43 -- scripts/common.sh@365 -- # decimal 2 00:05:40.602 04:20:43 -- scripts/common.sh@352 -- # local d=2 00:05:40.602 04:20:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.602 04:20:43 -- scripts/common.sh@354 -- # echo 2 00:05:40.602 04:20:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.602 04:20:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.602 04:20:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.602 04:20:43 -- scripts/common.sh@367 -- # return 0 00:05:40.602 04:20:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.602 04:20:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.602 --rc genhtml_branch_coverage=1 00:05:40.602 --rc genhtml_function_coverage=1 00:05:40.602 --rc genhtml_legend=1 00:05:40.602 --rc geninfo_all_blocks=1 00:05:40.602 --rc geninfo_unexecuted_blocks=1 00:05:40.602 00:05:40.602 ' 00:05:40.602 04:20:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.602 --rc genhtml_branch_coverage=1 00:05:40.602 --rc genhtml_function_coverage=1 00:05:40.602 --rc genhtml_legend=1 00:05:40.602 --rc geninfo_all_blocks=1 00:05:40.602 --rc geninfo_unexecuted_blocks=1 00:05:40.602 00:05:40.602 ' 00:05:40.602 04:20:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.602 --rc genhtml_branch_coverage=1 00:05:40.602 --rc genhtml_function_coverage=1 00:05:40.602 --rc genhtml_legend=1 00:05:40.602 --rc geninfo_all_blocks=1 00:05:40.602 --rc geninfo_unexecuted_blocks=1 00:05:40.602 00:05:40.602 ' 00:05:40.602 04:20:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.602 --rc genhtml_branch_coverage=1 00:05:40.602 --rc genhtml_function_coverage=1 00:05:40.602 --rc genhtml_legend=1 00:05:40.602 --rc geninfo_all_blocks=1 00:05:40.602 --rc geninfo_unexecuted_blocks=1 00:05:40.602 00:05:40.602 ' 00:05:40.602 04:20:43 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:40.602 04:20:43 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:40.602 04:20:43 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.602 04:20:43 -- accel/accel.sh@59 -- # spdk_tgt_pid=55994 00:05:40.602 04:20:43 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:40.602 04:20:43 -- accel/accel.sh@60 -- # waitforlisten 55994 00:05:40.602 04:20:43 -- accel/accel.sh@58 -- # build_accel_config 00:05:40.602 04:20:43 -- common/autotest_common.sh@829 -- # '[' -z 55994 ']' 00:05:40.602 04:20:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.602 04:20:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.602 04:20:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.602 04:20:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.602 04:20:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.602 04:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.602 04:20:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.602 04:20:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.861 04:20:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.861 04:20:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.861 04:20:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.861 04:20:43 -- accel/accel.sh@42 -- # jq -r . 00:05:40.861 [2024-12-07 04:20:43.910391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.861 [2024-12-07 04:20:43.910841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55994 ] 00:05:40.861 [2024-12-07 04:20:44.057537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.120 [2024-12-07 04:20:44.114125] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.120 [2024-12-07 04:20:44.114320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.687 04:20:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.687 04:20:44 -- common/autotest_common.sh@862 -- # return 0 00:05:41.687 04:20:44 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:41.687 04:20:44 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:41.687 04:20:44 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:41.687 04:20:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.687 04:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.687 04:20:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.947 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.947 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.947 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.947 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.947 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.947 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.947 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.947 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.947 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.947 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.947 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.947 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.947 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # 
IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # IFS== 00:05:41.948 04:20:44 -- accel/accel.sh@64 -- # read -r opc module 00:05:41.948 04:20:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:41.948 04:20:44 -- accel/accel.sh@67 -- # killprocess 55994 00:05:41.948 04:20:44 -- common/autotest_common.sh@936 -- # '[' -z 55994 ']' 00:05:41.948 04:20:44 -- common/autotest_common.sh@940 -- # kill -0 55994 00:05:41.948 04:20:44 -- common/autotest_common.sh@941 -- # uname 00:05:41.948 04:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.948 04:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55994 00:05:41.948 04:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.948 04:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.948 killing process with pid 55994 00:05:41.948 04:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55994' 00:05:41.948 04:20:44 -- common/autotest_common.sh@955 -- # kill 55994 00:05:41.948 04:20:44 -- common/autotest_common.sh@960 -- # wait 55994 00:05:42.207 04:20:45 -- accel/accel.sh@68 -- # trap - ERR 00:05:42.207 04:20:45 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:42.207 04:20:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:42.207 04:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.207 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.207 04:20:45 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:42.207 04:20:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:42.207 04:20:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.207 04:20:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.207 04:20:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.207 04:20:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.207 04:20:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.207 04:20:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.207 04:20:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.207 04:20:45 -- accel/accel.sh@42 -- # jq -r . 
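
The expected_opcs loop above feeds on the accel_get_opc_assignments RPC, which returns a JSON object mapping each accel opcode to the module that will service it; the jq filter flattens that into key=value pairs, and in this run every opcode resolves to the software module since no hardware accel engine is configured. Outside the harness the same query would look roughly like this (rpc.py path assumed relative to the SPDK repo root):

  # prints one "opcode=module" pair per line, e.g. "copy=software"
  scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
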
00:05:42.207 04:20:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.207 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.207 04:20:45 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:42.207 04:20:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:42.207 04:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.207 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.207 ************************************ 00:05:42.207 START TEST accel_missing_filename 00:05:42.207 ************************************ 00:05:42.207 04:20:45 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:42.207 04:20:45 -- common/autotest_common.sh@650 -- # local es=0 00:05:42.207 04:20:45 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:42.207 04:20:45 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:42.207 04:20:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.207 04:20:45 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:42.207 04:20:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.207 04:20:45 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:42.207 04:20:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:42.207 04:20:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.207 04:20:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.207 04:20:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.207 04:20:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.207 04:20:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.207 04:20:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.208 04:20:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.208 04:20:45 -- accel/accel.sh@42 -- # jq -r . 00:05:42.208 [2024-12-07 04:20:45.375964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.208 [2024-12-07 04:20:45.376095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56040 ] 00:05:42.466 [2024-12-07 04:20:45.512814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.466 [2024-12-07 04:20:45.562325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.466 [2024-12-07 04:20:45.590426] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.466 [2024-12-07 04:20:45.628307] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:42.725 A filename is required. 
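The "A filename is required." error above is the expected outcome of accel_missing_filename: the test runs a compress workload without -l and relies on the harness's NOT wrapper to turn the non-zero exit into a pass. The entries that follow show the exit status being normalized (es=234, reduced past the signal offset to es=106, then mapped to es=1) before the inverted check. A minimal sketch of that pattern, assuming a simplified stand-in for the real NOT helper in common/autotest_common.sh:

    # Simplified stand-in for the test harness's NOT() helper (assumption: the
    # real helper in common/autotest_common.sh performs additional checks).
    NOT() {
        local es=0
        "$@" || es=$?                        # run the wrapped command, capture its exit status
        (( es > 128 )) && es=$((es - 128))   # strip the signal offset (es=234 -> es=106 in the log)
        (( es != 0 )) && es=1                # collapse any remaining failure code to 1
        (( es != 0 ))                        # succeed only if the wrapped command failed
    }

    # Passes: accel_perf exits non-zero when -w compress is given without -l
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress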
00:05:42.725 04:20:45 -- common/autotest_common.sh@653 -- # es=234 00:05:42.725 04:20:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.725 04:20:45 -- common/autotest_common.sh@662 -- # es=106 00:05:42.725 04:20:45 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:42.725 04:20:45 -- common/autotest_common.sh@670 -- # es=1 00:05:42.725 04:20:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.725 00:05:42.725 real 0m0.377s 00:05:42.725 user 0m0.243s 00:05:42.725 sys 0m0.075s 00:05:42.725 ************************************ 00:05:42.725 END TEST accel_missing_filename 00:05:42.725 ************************************ 00:05:42.725 04:20:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.725 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.725 04:20:45 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.725 04:20:45 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:42.725 04:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.725 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.725 ************************************ 00:05:42.725 START TEST accel_compress_verify 00:05:42.725 ************************************ 00:05:42.725 04:20:45 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.725 04:20:45 -- common/autotest_common.sh@650 -- # local es=0 00:05:42.725 04:20:45 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.725 04:20:45 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:42.725 04:20:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.725 04:20:45 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:42.725 04:20:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.725 04:20:45 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.725 04:20:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:42.725 04:20:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.725 04:20:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.725 04:20:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.725 04:20:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.725 04:20:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.725 04:20:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.725 04:20:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.725 04:20:45 -- accel/accel.sh@42 -- # jq -r . 00:05:42.725 [2024-12-07 04:20:45.796340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:42.725 [2024-12-07 04:20:45.797117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56070 ] 00:05:42.725 [2024-12-07 04:20:45.932731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.984 [2024-12-07 04:20:45.982086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.984 [2024-12-07 04:20:46.009744] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.984 [2024-12-07 04:20:46.048962] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:42.984 00:05:42.984 Compression does not support the verify option, aborting. 00:05:42.984 ************************************ 00:05:42.984 END TEST accel_compress_verify 00:05:42.984 ************************************ 00:05:42.984 04:20:46 -- common/autotest_common.sh@653 -- # es=161 00:05:42.984 04:20:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.984 04:20:46 -- common/autotest_common.sh@662 -- # es=33 00:05:42.984 04:20:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:42.984 04:20:46 -- common/autotest_common.sh@670 -- # es=1 00:05:42.984 04:20:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.984 00:05:42.984 real 0m0.360s 00:05:42.984 user 0m0.235s 00:05:42.984 sys 0m0.071s 00:05:42.984 04:20:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.984 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.984 04:20:46 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:42.984 04:20:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:42.984 04:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.984 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.984 ************************************ 00:05:42.984 START TEST accel_wrong_workload 00:05:42.984 ************************************ 00:05:42.984 04:20:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:42.984 04:20:46 -- common/autotest_common.sh@650 -- # local es=0 00:05:42.984 04:20:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:42.984 04:20:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:42.984 04:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.984 04:20:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:42.984 04:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.984 04:20:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:42.984 04:20:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:42.984 04:20:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.984 04:20:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.984 04:20:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.984 04:20:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.984 04:20:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.984 04:20:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.984 04:20:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.984 04:20:46 -- accel/accel.sh@42 -- # jq -r . 
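Every accel_perf invocation in this log reads its configuration from "-c /dev/fd/62": build_accel_config collects JSON fragments in the accel_json_cfg array (created empty here), joins them with IFS=',' and feeds the result through jq -r . into a process-substitution file descriptor. A rough illustration of that pattern, with a hypothetical config body since the exact JSON wrapping is not visible in this trace:

    # The <(...) process substitution is what makes the config appear as a
    # /dev/fd/NN path (here /dev/fd/62 in the traces above).
    cfg='{"subsystems": []}'   # hypothetical placeholder; the real JSON comes from build_accel_config
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(jq -r . <<< "$cfg") -t 1 -w crc32c -S 32 -y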
00:05:42.984 Unsupported workload type: foobar 00:05:42.984 [2024-12-07 04:20:46.202280] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:42.984 accel_perf options: 00:05:42.984 [-h help message] 00:05:42.984 [-q queue depth per core] 00:05:42.984 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:42.984 [-T number of threads per core 00:05:42.984 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:42.984 [-t time in seconds] 00:05:42.984 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:42.984 [ dif_verify, , dif_generate, dif_generate_copy 00:05:42.984 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:42.984 [-l for compress/decompress workloads, name of uncompressed input file 00:05:42.984 [-S for crc32c workload, use this seed value (default 0) 00:05:42.984 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:42.984 [-f for fill workload, use this BYTE value (default 255) 00:05:42.984 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:42.984 [-y verify result if this switch is on] 00:05:42.984 [-a tasks to allocate per core (default: same value as -q)] 00:05:42.984 Can be used to spread operations across a wider range of memory. 00:05:42.984 04:20:46 -- common/autotest_common.sh@653 -- # es=1 00:05:42.984 04:20:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.984 04:20:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.984 ************************************ 00:05:42.984 END TEST accel_wrong_workload 00:05:42.984 ************************************ 00:05:42.984 04:20:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.984 00:05:42.984 real 0m0.026s 00:05:42.984 user 0m0.016s 00:05:42.984 sys 0m0.010s 00:05:42.984 04:20:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.984 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.242 04:20:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:43.242 04:20:46 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:43.242 04:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.242 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.242 ************************************ 00:05:43.242 START TEST accel_negative_buffers 00:05:43.242 ************************************ 00:05:43.242 04:20:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:43.242 04:20:46 -- common/autotest_common.sh@650 -- # local es=0 00:05:43.242 04:20:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:43.242 04:20:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:43.242 04:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.242 04:20:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:43.242 04:20:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.242 04:20:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:43.242 04:20:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:43.242 04:20:46 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:43.242 04:20:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.242 04:20:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.242 04:20:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.242 04:20:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.242 04:20:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.242 04:20:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.242 04:20:46 -- accel/accel.sh@42 -- # jq -r . 00:05:43.242 -x option must be non-negative. 00:05:43.242 [2024-12-07 04:20:46.280905] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:43.242 accel_perf options: 00:05:43.242 [-h help message] 00:05:43.242 [-q queue depth per core] 00:05:43.242 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:43.242 [-T number of threads per core 00:05:43.242 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:43.242 [-t time in seconds] 00:05:43.242 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:43.242 [ dif_verify, , dif_generate, dif_generate_copy 00:05:43.242 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:43.242 [-l for compress/decompress workloads, name of uncompressed input file 00:05:43.242 [-S for crc32c workload, use this seed value (default 0) 00:05:43.242 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:43.242 [-f for fill workload, use this BYTE value (default 255) 00:05:43.242 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:43.242 [-y verify result if this switch is on] 00:05:43.242 [-a tasks to allocate per core (default: same value as -q)] 00:05:43.242 Can be used to spread operations across a wider range of memory. 
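The usage text above is printed for both rejected invocations ("-w foobar" and "-x -1") and covers the accel_perf option set that the positive tests later in this log exercise. Assembled from that listing and the test commands that appear below (the -c config argument is omitted for brevity), representative invocations look like:

    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

    # crc32c for 1 second with seed 32, verifying every result (-y)
    "$accel_perf" -t 1 -w crc32c -S 32 -y

    # fill with byte value 128 (0x80), queue depth 64 and 64 tasks per core
    "$accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y

    # xor needs at least two source buffers, which is why -x -1 was rejected above
    "$accel_perf" -t 1 -w xor -y -x 2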
00:05:43.242 04:20:46 -- common/autotest_common.sh@653 -- # es=1 00:05:43.242 ************************************ 00:05:43.242 END TEST accel_negative_buffers 00:05:43.242 ************************************ 00:05:43.242 04:20:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.242 04:20:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.242 04:20:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.242 00:05:43.242 real 0m0.030s 00:05:43.242 user 0m0.013s 00:05:43.242 sys 0m0.017s 00:05:43.242 04:20:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.242 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.242 04:20:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:43.242 04:20:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:43.242 04:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.242 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:05:43.242 ************************************ 00:05:43.242 START TEST accel_crc32c 00:05:43.243 ************************************ 00:05:43.243 04:20:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:43.243 04:20:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.243 04:20:46 -- accel/accel.sh@17 -- # local accel_module 00:05:43.243 04:20:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:43.243 04:20:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.243 04:20:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:43.243 04:20:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.243 04:20:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.243 04:20:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.243 04:20:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.243 04:20:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.243 04:20:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.243 04:20:46 -- accel/accel.sh@42 -- # jq -r . 00:05:43.243 [2024-12-07 04:20:46.362177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.243 [2024-12-07 04:20:46.362407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56123 ] 00:05:43.500 [2024-12-07 04:20:46.499773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.500 [2024-12-07 04:20:46.547750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.878 04:20:47 -- accel/accel.sh@18 -- # out=' 00:05:44.878 SPDK Configuration: 00:05:44.878 Core mask: 0x1 00:05:44.878 00:05:44.878 Accel Perf Configuration: 00:05:44.878 Workload Type: crc32c 00:05:44.878 CRC-32C seed: 32 00:05:44.878 Transfer size: 4096 bytes 00:05:44.878 Vector count 1 00:05:44.878 Module: software 00:05:44.878 Queue depth: 32 00:05:44.878 Allocate depth: 32 00:05:44.878 # threads/core: 1 00:05:44.878 Run time: 1 seconds 00:05:44.878 Verify: Yes 00:05:44.878 00:05:44.878 Running for 1 seconds... 
00:05:44.878 00:05:44.878 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:44.878 ------------------------------------------------------------------------------------ 00:05:44.879 0,0 523712/s 2045 MiB/s 0 0 00:05:44.879 ==================================================================================== 00:05:44.879 Total 523712/s 2045 MiB/s 0 0' 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:44.879 04:20:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.879 04:20:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.879 04:20:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.879 04:20:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.879 04:20:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.879 04:20:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.879 04:20:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.879 04:20:47 -- accel/accel.sh@42 -- # jq -r . 00:05:44.879 [2024-12-07 04:20:47.724474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.879 [2024-12-07 04:20:47.724561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56137 ] 00:05:44.879 [2024-12-07 04:20:47.860280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.879 [2024-12-07 04:20:47.908236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=0x1 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=crc32c 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=32 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=software 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=32 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=32 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=1 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val=Yes 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:44.879 04:20:47 -- accel/accel.sh@21 -- # val= 00:05:44.879 04:20:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # IFS=: 00:05:44.879 04:20:47 -- accel/accel.sh@20 -- # read -r var val 00:05:45.818 04:20:49 -- accel/accel.sh@21 -- # val= 00:05:46.079 04:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # IFS=: 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # read -r var val 00:05:46.079 04:20:49 -- accel/accel.sh@21 -- # val= 00:05:46.079 04:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # IFS=: 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # read -r var val 00:05:46.079 04:20:49 -- accel/accel.sh@21 -- # val= 00:05:46.079 04:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # IFS=: 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # read -r var val 00:05:46.079 04:20:49 -- accel/accel.sh@21 -- # val= 00:05:46.079 04:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # IFS=: 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # read -r var val 00:05:46.079 04:20:49 -- accel/accel.sh@21 -- # val= 00:05:46.079 04:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # IFS=: 00:05:46.079 04:20:49 -- 
accel/accel.sh@20 -- # read -r var val 00:05:46.079 04:20:49 -- accel/accel.sh@21 -- # val= 00:05:46.079 04:20:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # IFS=: 00:05:46.079 04:20:49 -- accel/accel.sh@20 -- # read -r var val 00:05:46.079 04:20:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.079 04:20:49 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:46.079 04:20:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.079 00:05:46.079 real 0m2.728s 00:05:46.079 user 0m2.379s 00:05:46.079 sys 0m0.145s 00:05:46.079 04:20:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.079 04:20:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.079 ************************************ 00:05:46.079 END TEST accel_crc32c 00:05:46.079 ************************************ 00:05:46.079 04:20:49 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:46.079 04:20:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:46.079 04:20:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.079 04:20:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.079 ************************************ 00:05:46.079 START TEST accel_crc32c_C2 00:05:46.079 ************************************ 00:05:46.079 04:20:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:46.079 04:20:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.079 04:20:49 -- accel/accel.sh@17 -- # local accel_module 00:05:46.079 04:20:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:46.079 04:20:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:46.079 04:20:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.079 04:20:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.079 04:20:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.079 04:20:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.079 04:20:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.079 04:20:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.079 04:20:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.079 04:20:49 -- accel/accel.sh@42 -- # jq -r . 00:05:46.079 [2024-12-07 04:20:49.141985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.079 [2024-12-07 04:20:49.142081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56177 ] 00:05:46.079 [2024-12-07 04:20:49.270254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.338 [2024-12-07 04:20:49.319125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.277 04:20:50 -- accel/accel.sh@18 -- # out=' 00:05:47.277 SPDK Configuration: 00:05:47.277 Core mask: 0x1 00:05:47.277 00:05:47.277 Accel Perf Configuration: 00:05:47.277 Workload Type: crc32c 00:05:47.277 CRC-32C seed: 0 00:05:47.277 Transfer size: 4096 bytes 00:05:47.277 Vector count 2 00:05:47.277 Module: software 00:05:47.277 Queue depth: 32 00:05:47.277 Allocate depth: 32 00:05:47.277 # threads/core: 1 00:05:47.277 Run time: 1 seconds 00:05:47.277 Verify: Yes 00:05:47.277 00:05:47.277 Running for 1 seconds... 
00:05:47.277 00:05:47.277 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.277 ------------------------------------------------------------------------------------ 00:05:47.277 0,0 408064/s 3188 MiB/s 0 0 00:05:47.277 ==================================================================================== 00:05:47.277 Total 408064/s 1594 MiB/s 0 0' 00:05:47.277 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.277 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.277 04:20:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:47.277 04:20:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:47.277 04:20:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.277 04:20:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.277 04:20:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.277 04:20:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.277 04:20:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.277 04:20:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.277 04:20:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.277 04:20:50 -- accel/accel.sh@42 -- # jq -r . 00:05:47.277 [2024-12-07 04:20:50.501023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.277 [2024-12-07 04:20:50.501112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56191 ] 00:05:47.537 [2024-12-07 04:20:50.632683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.537 [2024-12-07 04:20:50.679834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=0x1 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=crc32c 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=0 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=software 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@23 -- # accel_module=software 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=32 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=32 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=1 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val=Yes 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:47.537 04:20:50 -- accel/accel.sh@21 -- # val= 00:05:47.537 04:20:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # IFS=: 00:05:47.537 04:20:50 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 04:20:51 -- accel/accel.sh@21 -- # val= 00:05:48.915 04:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # IFS=: 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 04:20:51 -- accel/accel.sh@21 -- # val= 00:05:48.915 04:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # IFS=: 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 04:20:51 -- accel/accel.sh@21 -- # val= 00:05:48.915 04:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # IFS=: 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 ************************************ 00:05:48.915 END TEST accel_crc32c_C2 00:05:48.915 ************************************ 00:05:48.915 04:20:51 -- accel/accel.sh@21 -- # val= 00:05:48.915 04:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # IFS=: 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 04:20:51 -- accel/accel.sh@21 -- # val= 
00:05:48.915 04:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # IFS=: 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 04:20:51 -- accel/accel.sh@21 -- # val= 00:05:48.915 04:20:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # IFS=: 00:05:48.915 04:20:51 -- accel/accel.sh@20 -- # read -r var val 00:05:48.915 04:20:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.915 04:20:51 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:48.915 04:20:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.915 00:05:48.915 real 0m2.714s 00:05:48.915 user 0m2.388s 00:05:48.915 sys 0m0.129s 00:05:48.915 04:20:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.915 04:20:51 -- common/autotest_common.sh@10 -- # set +x 00:05:48.915 04:20:51 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:48.915 04:20:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.915 04:20:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.915 04:20:51 -- common/autotest_common.sh@10 -- # set +x 00:05:48.915 ************************************ 00:05:48.915 START TEST accel_copy 00:05:48.915 ************************************ 00:05:48.915 04:20:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:48.915 04:20:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.915 04:20:51 -- accel/accel.sh@17 -- # local accel_module 00:05:48.915 04:20:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:48.915 04:20:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:48.915 04:20:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.915 04:20:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.915 04:20:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.915 04:20:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.915 04:20:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.915 04:20:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.915 04:20:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.915 04:20:51 -- accel/accel.sh@42 -- # jq -r . 00:05:48.915 [2024-12-07 04:20:51.906250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.915 [2024-12-07 04:20:51.906331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56226 ] 00:05:48.915 [2024-12-07 04:20:52.035232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.915 [2024-12-07 04:20:52.082717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.296 04:20:53 -- accel/accel.sh@18 -- # out=' 00:05:50.296 SPDK Configuration: 00:05:50.296 Core mask: 0x1 00:05:50.296 00:05:50.296 Accel Perf Configuration: 00:05:50.296 Workload Type: copy 00:05:50.296 Transfer size: 4096 bytes 00:05:50.296 Vector count 1 00:05:50.296 Module: software 00:05:50.296 Queue depth: 32 00:05:50.296 Allocate depth: 32 00:05:50.296 # threads/core: 1 00:05:50.296 Run time: 1 seconds 00:05:50.296 Verify: Yes 00:05:50.296 00:05:50.296 Running for 1 seconds... 
00:05:50.296 00:05:50.296 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.296 ------------------------------------------------------------------------------------ 00:05:50.296 0,0 371456/s 1451 MiB/s 0 0 00:05:50.296 ==================================================================================== 00:05:50.296 Total 371456/s 1451 MiB/s 0 0' 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.296 04:20:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.296 04:20:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.296 04:20:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:50.296 04:20:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.296 04:20:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.296 04:20:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.296 04:20:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.296 04:20:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.296 04:20:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.296 04:20:53 -- accel/accel.sh@42 -- # jq -r . 00:05:50.296 [2024-12-07 04:20:53.256519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.296 [2024-12-07 04:20:53.256608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56245 ] 00:05:50.296 [2024-12-07 04:20:53.390608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.296 [2024-12-07 04:20:53.442546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.296 04:20:53 -- accel/accel.sh@21 -- # val= 00:05:50.296 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.296 04:20:53 -- accel/accel.sh@21 -- # val= 00:05:50.296 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.296 04:20:53 -- accel/accel.sh@21 -- # val=0x1 00:05:50.296 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.296 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.296 04:20:53 -- accel/accel.sh@21 -- # val= 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val= 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val=copy 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- 
accel/accel.sh@21 -- # val= 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val=software 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val=32 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val=32 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val=1 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val=Yes 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val= 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:50.297 04:20:53 -- accel/accel.sh@21 -- # val= 00:05:50.297 04:20:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # IFS=: 00:05:50.297 04:20:53 -- accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@21 -- # val= 00:05:51.675 04:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # IFS=: 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@21 -- # val= 00:05:51.675 04:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # IFS=: 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@21 -- # val= 00:05:51.675 04:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # IFS=: 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@21 -- # val= 00:05:51.675 04:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # IFS=: 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@21 -- # val= 00:05:51.675 04:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # IFS=: 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@21 -- # val= 00:05:51.675 04:20:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.675 04:20:54 -- accel/accel.sh@20 -- # IFS=: 00:05:51.675 04:20:54 -- 
accel/accel.sh@20 -- # read -r var val 00:05:51.675 04:20:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:51.675 04:20:54 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:51.675 04:20:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.675 00:05:51.675 real 0m2.711s 00:05:51.675 user 0m2.379s 00:05:51.675 sys 0m0.132s 00:05:51.675 04:20:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.675 04:20:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.675 ************************************ 00:05:51.675 END TEST accel_copy 00:05:51.675 ************************************ 00:05:51.675 04:20:54 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.675 04:20:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:51.675 04:20:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.675 04:20:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.675 ************************************ 00:05:51.675 START TEST accel_fill 00:05:51.675 ************************************ 00:05:51.675 04:20:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.675 04:20:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.675 04:20:54 -- accel/accel.sh@17 -- # local accel_module 00:05:51.675 04:20:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.675 04:20:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.675 04:20:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.675 04:20:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.675 04:20:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.675 04:20:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.675 04:20:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.675 04:20:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.675 04:20:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.675 04:20:54 -- accel/accel.sh@42 -- # jq -r . 00:05:51.675 [2024-12-07 04:20:54.671111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.675 [2024-12-07 04:20:54.671203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56274 ] 00:05:51.675 [2024-12-07 04:20:54.808493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.675 [2024-12-07 04:20:54.856265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.051 04:20:56 -- accel/accel.sh@18 -- # out=' 00:05:53.051 SPDK Configuration: 00:05:53.051 Core mask: 0x1 00:05:53.051 00:05:53.051 Accel Perf Configuration: 00:05:53.051 Workload Type: fill 00:05:53.051 Fill pattern: 0x80 00:05:53.051 Transfer size: 4096 bytes 00:05:53.051 Vector count 1 00:05:53.051 Module: software 00:05:53.051 Queue depth: 64 00:05:53.051 Allocate depth: 64 00:05:53.051 # threads/core: 1 00:05:53.051 Run time: 1 seconds 00:05:53.051 Verify: Yes 00:05:53.051 00:05:53.051 Running for 1 seconds... 
00:05:53.051 00:05:53.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.051 ------------------------------------------------------------------------------------ 00:05:53.051 0,0 539008/s 2105 MiB/s 0 0 00:05:53.051 ==================================================================================== 00:05:53.051 Total 539008/s 2105 MiB/s 0 0' 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.051 04:20:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.051 04:20:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.051 04:20:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.051 04:20:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.051 04:20:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.051 04:20:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.051 04:20:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.051 04:20:56 -- accel/accel.sh@42 -- # jq -r . 00:05:53.051 [2024-12-07 04:20:56.034674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.051 [2024-12-07 04:20:56.034773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56294 ] 00:05:53.051 [2024-12-07 04:20:56.166422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.051 [2024-12-07 04:20:56.222102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=0x1 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=fill 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=0x80 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 
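In these per-core summaries the bandwidth column follows directly from the transfer rate and the 4096-byte transfer size (transfers/s x 4 KiB, reported in MiB/s). Checking the fill result above as a quick sanity calculation:

    # 539008 transfers/s * 4096 bytes per transfer, converted to MiB/s
    echo $(( 539008 * 4096 / 1048576 ))   # -> 2105, matching the fill table above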
00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=software 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=64 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=64 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=1 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val=Yes 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:53.051 04:20:56 -- accel/accel.sh@21 -- # val= 00:05:53.051 04:20:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # IFS=: 00:05:53.051 04:20:56 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.425 04:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.425 04:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.425 04:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.425 04:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.425 04:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # IFS=: 
00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@21 -- # val= 00:05:54.425 04:20:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # IFS=: 00:05:54.425 04:20:57 -- accel/accel.sh@20 -- # read -r var val 00:05:54.425 04:20:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:54.425 04:20:57 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:54.425 04:20:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.425 00:05:54.425 real 0m2.730s 00:05:54.425 user 0m2.384s 00:05:54.425 sys 0m0.145s 00:05:54.425 ************************************ 00:05:54.425 END TEST accel_fill 00:05:54.425 ************************************ 00:05:54.425 04:20:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.425 04:20:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.425 04:20:57 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:54.425 04:20:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:54.425 04:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.425 04:20:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.425 ************************************ 00:05:54.425 START TEST accel_copy_crc32c 00:05:54.425 ************************************ 00:05:54.425 04:20:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:54.425 04:20:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.425 04:20:57 -- accel/accel.sh@17 -- # local accel_module 00:05:54.425 04:20:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:54.425 04:20:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:54.425 04:20:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.425 04:20:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.425 04:20:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.425 04:20:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.425 04:20:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.425 04:20:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.425 04:20:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.425 04:20:57 -- accel/accel.sh@42 -- # jq -r . 00:05:54.425 [2024-12-07 04:20:57.452689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.425 [2024-12-07 04:20:57.453498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56328 ] 00:05:54.425 [2024-12-07 04:20:57.589249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.425 [2024-12-07 04:20:57.637030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.799 04:20:58 -- accel/accel.sh@18 -- # out=' 00:05:55.799 SPDK Configuration: 00:05:55.799 Core mask: 0x1 00:05:55.799 00:05:55.799 Accel Perf Configuration: 00:05:55.799 Workload Type: copy_crc32c 00:05:55.799 CRC-32C seed: 0 00:05:55.799 Vector size: 4096 bytes 00:05:55.799 Transfer size: 4096 bytes 00:05:55.799 Vector count 1 00:05:55.799 Module: software 00:05:55.799 Queue depth: 32 00:05:55.799 Allocate depth: 32 00:05:55.799 # threads/core: 1 00:05:55.799 Run time: 1 seconds 00:05:55.799 Verify: Yes 00:05:55.799 00:05:55.799 Running for 1 seconds... 
00:05:55.799 00:05:55.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:55.799 ------------------------------------------------------------------------------------ 00:05:55.799 0,0 288928/s 1128 MiB/s 0 0 00:05:55.799 ==================================================================================== 00:05:55.799 Total 288928/s 1128 MiB/s 0 0' 00:05:55.799 04:20:58 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:55.799 04:20:58 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:55.799 04:20:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.799 04:20:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.799 04:20:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.799 04:20:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.799 04:20:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.799 04:20:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.799 04:20:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.799 04:20:58 -- accel/accel.sh@42 -- # jq -r . 00:05:55.799 [2024-12-07 04:20:58.811370] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.799 [2024-12-07 04:20:58.811483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56342 ] 00:05:55.799 [2024-12-07 04:20:58.947229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.799 [2024-12-07 04:20:58.996137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val=0x1 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val=0 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 
04:20:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val=software 00:05:55.799 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.799 04:20:59 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:55.799 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:55.799 04:20:59 -- accel/accel.sh@21 -- # val=32 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.056 04:20:59 -- accel/accel.sh@21 -- # val=32 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.056 04:20:59 -- accel/accel.sh@21 -- # val=1 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.056 04:20:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.056 04:20:59 -- accel/accel.sh@21 -- # val=Yes 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.056 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.056 04:20:59 -- accel/accel.sh@21 -- # val= 00:05:56.056 04:20:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.056 04:20:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.990 04:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.990 04:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.990 04:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.990 04:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # IFS=: 
00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.990 04:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@21 -- # val= 00:05:56.990 04:21:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # IFS=: 00:05:56.990 04:21:00 -- accel/accel.sh@20 -- # read -r var val 00:05:56.990 04:21:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.990 04:21:00 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:56.990 04:21:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.990 00:05:56.990 real 0m2.729s 00:05:56.990 user 0m2.383s 00:05:56.990 sys 0m0.142s 00:05:56.990 ************************************ 00:05:56.990 END TEST accel_copy_crc32c 00:05:56.990 ************************************ 00:05:56.990 04:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.990 04:21:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.990 04:21:00 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.990 04:21:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:56.990 04:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.990 04:21:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.990 ************************************ 00:05:56.990 START TEST accel_copy_crc32c_C2 00:05:56.990 ************************************ 00:05:56.990 04:21:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:56.990 04:21:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.990 04:21:00 -- accel/accel.sh@17 -- # local accel_module 00:05:56.991 04:21:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:56.991 04:21:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:56.991 04:21:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.991 04:21:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.991 04:21:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.991 04:21:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.991 04:21:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.991 04:21:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.991 04:21:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.991 04:21:00 -- accel/accel.sh@42 -- # jq -r . 00:05:57.250 [2024-12-07 04:21:00.234214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
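The copy_crc32c test that just finished and the -C 2 variant starting here follow the same pattern as the other accel tests in this log: accel_perf is invoked with verification enabled (-y) and a JSON accel config passed over /dev/fd/62, and the harness then parses the captured "SPDK Configuration" dump (the IFS=: / read -r loops above) to confirm the software module executed the expected opcode. A minimal reproduction sketch, using only the binary path and flags echoed in the log; queue depth 32 and the 4096-byte transfer size are the defaults shown in the configuration dump, so no extra flags are assumed:
# Sketch only: path and flags copied from the accel_perf command lines above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
# Two-vector variant exercised by accel_copy_crc32c_C2 (-C sets the vector count, as in the log).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2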
00:05:57.250 [2024-12-07 04:21:00.234303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56377 ] 00:05:57.250 [2024-12-07 04:21:00.367329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.250 [2024-12-07 04:21:00.424730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.630 04:21:01 -- accel/accel.sh@18 -- # out=' 00:05:58.630 SPDK Configuration: 00:05:58.630 Core mask: 0x1 00:05:58.630 00:05:58.630 Accel Perf Configuration: 00:05:58.630 Workload Type: copy_crc32c 00:05:58.630 CRC-32C seed: 0 00:05:58.630 Vector size: 4096 bytes 00:05:58.630 Transfer size: 8192 bytes 00:05:58.630 Vector count 2 00:05:58.630 Module: software 00:05:58.630 Queue depth: 32 00:05:58.630 Allocate depth: 32 00:05:58.630 # threads/core: 1 00:05:58.630 Run time: 1 seconds 00:05:58.630 Verify: Yes 00:05:58.630 00:05:58.630 Running for 1 seconds... 00:05:58.630 00:05:58.630 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.630 ------------------------------------------------------------------------------------ 00:05:58.630 0,0 198976/s 1554 MiB/s 0 0 00:05:58.630 ==================================================================================== 00:05:58.630 Total 198976/s 777 MiB/s 0 0' 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.630 04:21:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:58.630 04:21:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:58.630 04:21:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.630 04:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.630 04:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.630 04:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.630 04:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.630 04:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.630 04:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.630 04:21:01 -- accel/accel.sh@42 -- # jq -r . 00:05:58.630 [2024-12-07 04:21:01.602299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
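A note on the copy_crc32c_C2 results table above: the per-core row is consistent with the 8192-byte transfer size (198976 transfers/s x 8192 B is roughly 1554 MiB/s), while the Total row of 777 MiB/s matches the 4096-byte vector size instead. That reads like the summary line being computed per 4096-byte vector rather than per transfer, but the log itself does not confirm this interpretation.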
00:05:58.630 [2024-12-07 04:21:01.602556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56396 ] 00:05:58.630 [2024-12-07 04:21:01.739453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.630 [2024-12-07 04:21:01.788765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.630 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.630 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.630 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.630 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.630 04:21:01 -- accel/accel.sh@21 -- # val=0x1 00:05:58.630 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.630 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.630 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.630 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.630 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.630 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=0 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=software 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=32 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=32 
00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=1 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val=Yes 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:05:58.631 04:21:01 -- accel/accel.sh@21 -- # val= 00:05:58.631 04:21:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # IFS=: 00:05:58.631 04:21:01 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@21 -- # val= 00:06:00.101 04:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@21 -- # val= 00:06:00.101 04:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@21 -- # val= 00:06:00.101 04:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@21 -- # val= 00:06:00.101 04:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@21 -- # val= 00:06:00.101 04:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@21 -- # val= 00:06:00.101 04:21:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.101 04:21:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.101 04:21:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.101 04:21:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:00.101 04:21:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.101 00:06:00.101 real 0m2.738s 00:06:00.101 user 0m2.388s 00:06:00.101 sys 0m0.151s 00:06:00.101 04:21:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.101 ************************************ 00:06:00.101 END TEST accel_copy_crc32c_C2 00:06:00.101 ************************************ 00:06:00.101 04:21:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.101 04:21:02 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:00.101 04:21:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
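Each of these sections is produced by the run_test helper from common/autotest_common.sh, which prints the START/END banners and times the wrapped command (the real/user/sys lines above). A simplified sketch of that pattern follows; it is not the actual helper, which also manages xtrace and exit-code bookkeeping:
run_test() {  # simplified sketch, not the real common/autotest_common.sh implementation
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # produces the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}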
00:06:00.101 04:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.101 04:21:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.101 ************************************ 00:06:00.101 START TEST accel_dualcast 00:06:00.101 ************************************ 00:06:00.101 04:21:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:00.101 04:21:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.101 04:21:02 -- accel/accel.sh@17 -- # local accel_module 00:06:00.101 04:21:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:00.101 04:21:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:00.101 04:21:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.101 04:21:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.101 04:21:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.101 04:21:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.101 04:21:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.101 04:21:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.101 04:21:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.101 04:21:03 -- accel/accel.sh@42 -- # jq -r . 00:06:00.101 [2024-12-07 04:21:03.021193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.101 [2024-12-07 04:21:03.021276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56425 ] 00:06:00.101 [2024-12-07 04:21:03.149235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.101 [2024-12-07 04:21:03.199349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.479 04:21:04 -- accel/accel.sh@18 -- # out=' 00:06:01.479 SPDK Configuration: 00:06:01.479 Core mask: 0x1 00:06:01.479 00:06:01.479 Accel Perf Configuration: 00:06:01.479 Workload Type: dualcast 00:06:01.479 Transfer size: 4096 bytes 00:06:01.479 Vector count 1 00:06:01.479 Module: software 00:06:01.479 Queue depth: 32 00:06:01.479 Allocate depth: 32 00:06:01.479 # threads/core: 1 00:06:01.479 Run time: 1 seconds 00:06:01.479 Verify: Yes 00:06:01.479 00:06:01.479 Running for 1 seconds... 00:06:01.479 00:06:01.479 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.479 ------------------------------------------------------------------------------------ 00:06:01.479 0,0 392224/s 1532 MiB/s 0 0 00:06:01.479 ==================================================================================== 00:06:01.479 Total 392224/s 1532 MiB/s 0 0' 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.479 04:21:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.479 04:21:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:01.479 04:21:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.479 04:21:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.479 04:21:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.479 04:21:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.479 04:21:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.479 04:21:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.479 04:21:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.479 04:21:04 -- accel/accel.sh@42 -- # jq -r . 
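dualcast copies a single 4096-byte source buffer into two destination buffers in one operation. The bandwidth column above appears to count the source once per operation (392224 transfers/s x 4096 B is roughly 1532 MiB/s), not the two destination writes. Reproduction sketch with the flags echoed in the log:
# Sketch only: same binary as above, dualcast workload.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y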
00:06:01.479 [2024-12-07 04:21:04.372762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.479 [2024-12-07 04:21:04.373011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56445 ] 00:06:01.479 [2024-12-07 04:21:04.500253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.479 [2024-12-07 04:21:04.549029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.479 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.479 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.479 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.479 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.479 04:21:04 -- accel/accel.sh@21 -- # val=0x1 00:06:01.479 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.479 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val=dualcast 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val=software 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val=32 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val=32 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val=1 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 
04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val=Yes 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:01.480 04:21:04 -- accel/accel.sh@21 -- # val= 00:06:01.480 04:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:01.480 04:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.851 04:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.851 04:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.851 04:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.851 04:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.851 04:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@21 -- # val= 00:06:02.851 04:21:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # IFS=: 00:06:02.851 04:21:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.851 04:21:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.851 04:21:05 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:02.851 04:21:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.851 00:06:02.851 real 0m2.723s 00:06:02.851 user 0m2.379s 00:06:02.851 sys 0m0.141s 00:06:02.851 ************************************ 00:06:02.851 END TEST accel_dualcast 00:06:02.851 ************************************ 00:06:02.851 04:21:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.851 04:21:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.851 04:21:05 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:02.851 04:21:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.851 04:21:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.851 04:21:05 -- common/autotest_common.sh@10 -- # set +x 00:06:02.851 ************************************ 00:06:02.851 START TEST accel_compare 00:06:02.851 ************************************ 00:06:02.851 04:21:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:02.851 
04:21:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.851 04:21:05 -- accel/accel.sh@17 -- # local accel_module 00:06:02.851 04:21:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:02.851 04:21:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:02.851 04:21:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.851 04:21:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.851 04:21:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.851 04:21:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.851 04:21:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.851 04:21:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.851 04:21:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.851 04:21:05 -- accel/accel.sh@42 -- # jq -r . 00:06:02.852 [2024-12-07 04:21:05.802422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.852 [2024-12-07 04:21:05.803004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56479 ] 00:06:02.852 [2024-12-07 04:21:05.939962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.852 [2024-12-07 04:21:05.987193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.225 04:21:07 -- accel/accel.sh@18 -- # out=' 00:06:04.225 SPDK Configuration: 00:06:04.225 Core mask: 0x1 00:06:04.225 00:06:04.225 Accel Perf Configuration: 00:06:04.225 Workload Type: compare 00:06:04.225 Transfer size: 4096 bytes 00:06:04.225 Vector count 1 00:06:04.225 Module: software 00:06:04.225 Queue depth: 32 00:06:04.225 Allocate depth: 32 00:06:04.225 # threads/core: 1 00:06:04.225 Run time: 1 seconds 00:06:04.225 Verify: Yes 00:06:04.225 00:06:04.225 Running for 1 seconds... 00:06:04.225 00:06:04.225 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.225 ------------------------------------------------------------------------------------ 00:06:04.225 0,0 533728/s 2084 MiB/s 0 0 00:06:04.225 ==================================================================================== 00:06:04.225 Total 533728/s 2084 MiB/s 0 0' 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.225 04:21:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:04.225 04:21:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:04.225 04:21:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.225 04:21:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.225 04:21:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.225 04:21:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.225 04:21:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.225 04:21:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.225 04:21:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.225 04:21:07 -- accel/accel.sh@42 -- # jq -r . 00:06:04.225 [2024-12-07 04:21:07.160087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
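compare checks two 4096-byte buffers for equality without writing anything, which fits its posting a higher per-core rate (533728 transfers/s x 4096 B, roughly 2084 MiB/s) than the copy_crc32c and dualcast results above. Reproduction sketch:
# Sketch only: compare workload, flags as echoed in the log above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y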
00:06:04.225 [2024-12-07 04:21:07.160178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56493 ] 00:06:04.225 [2024-12-07 04:21:07.295763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.225 [2024-12-07 04:21:07.343585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.225 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.225 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.225 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.225 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.225 04:21:07 -- accel/accel.sh@21 -- # val=0x1 00:06:04.225 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.225 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.225 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val=compare 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val=software 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val=32 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val=32 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val=1 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val=Yes 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.226 04:21:07 -- accel/accel.sh@21 -- # val= 00:06:04.226 04:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.226 04:21:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@21 -- # val= 00:06:05.598 04:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # IFS=: 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@21 -- # val= 00:06:05.598 04:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # IFS=: 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@21 -- # val= 00:06:05.598 04:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # IFS=: 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@21 -- # val= 00:06:05.598 ************************************ 00:06:05.598 END TEST accel_compare 00:06:05.598 ************************************ 00:06:05.598 04:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # IFS=: 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@21 -- # val= 00:06:05.598 04:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # IFS=: 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@21 -- # val= 00:06:05.598 04:21:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # IFS=: 00:06:05.598 04:21:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.598 04:21:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.598 04:21:08 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:05.598 04:21:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.598 00:06:05.598 real 0m2.719s 00:06:05.598 user 0m2.377s 00:06:05.598 sys 0m0.135s 00:06:05.598 04:21:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.598 04:21:08 -- common/autotest_common.sh@10 -- # set +x 00:06:05.598 04:21:08 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:05.598 04:21:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.598 04:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.598 04:21:08 -- common/autotest_common.sh@10 -- # set +x 00:06:05.598 ************************************ 00:06:05.598 START TEST accel_xor 00:06:05.598 ************************************ 00:06:05.598 04:21:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:05.598 04:21:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.598 04:21:08 -- accel/accel.sh@17 -- # local accel_module 00:06:05.598 
04:21:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:05.598 04:21:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:05.598 04:21:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.598 04:21:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.598 04:21:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.598 04:21:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.598 04:21:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.598 04:21:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.598 04:21:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.599 04:21:08 -- accel/accel.sh@42 -- # jq -r . 00:06:05.599 [2024-12-07 04:21:08.573787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.599 [2024-12-07 04:21:08.573875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56528 ] 00:06:05.599 [2024-12-07 04:21:08.711723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.599 [2024-12-07 04:21:08.768710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.975 04:21:09 -- accel/accel.sh@18 -- # out=' 00:06:06.975 SPDK Configuration: 00:06:06.975 Core mask: 0x1 00:06:06.975 00:06:06.975 Accel Perf Configuration: 00:06:06.975 Workload Type: xor 00:06:06.975 Source buffers: 2 00:06:06.975 Transfer size: 4096 bytes 00:06:06.975 Vector count 1 00:06:06.975 Module: software 00:06:06.975 Queue depth: 32 00:06:06.975 Allocate depth: 32 00:06:06.975 # threads/core: 1 00:06:06.975 Run time: 1 seconds 00:06:06.975 Verify: Yes 00:06:06.975 00:06:06.975 Running for 1 seconds... 00:06:06.975 00:06:06.975 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.975 ------------------------------------------------------------------------------------ 00:06:06.975 0,0 282464/s 1103 MiB/s 0 0 00:06:06.975 ==================================================================================== 00:06:06.975 Total 282464/s 1103 MiB/s 0 0' 00:06:06.975 04:21:09 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:06.975 04:21:09 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:06.975 04:21:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.975 04:21:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.975 04:21:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.975 04:21:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.975 04:21:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.975 04:21:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.975 04:21:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.975 04:21:09 -- accel/accel.sh@42 -- # jq -r . 00:06:06.975 [2024-12-07 04:21:09.946444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
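xor reads two 4096-byte source buffers ("Source buffers: 2" in the dump above) and writes their bytewise XOR to a destination; the -x flag used by the next test raises the source count. Reproduction sketch:
# Sketch only: xor with the two source buffers shown in the configuration dump.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y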
00:06:06.975 [2024-12-07 04:21:09.946527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56547 ] 00:06:06.975 [2024-12-07 04:21:10.078072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.975 [2024-12-07 04:21:10.125736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=0x1 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=xor 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=2 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=software 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=32 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=32 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=1 00:06:06.975 04:21:10 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val=Yes 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.975 04:21:10 -- accel/accel.sh@21 -- # val= 00:06:06.975 04:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.975 04:21:10 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.355 04:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.355 04:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.355 04:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.355 ************************************ 00:06:08.355 END TEST accel_xor 00:06:08.355 ************************************ 00:06:08.355 04:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.355 04:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@21 -- # val= 00:06:08.355 04:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.355 04:21:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.355 04:21:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.355 04:21:11 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:08.355 04:21:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.355 00:06:08.355 real 0m2.737s 00:06:08.355 user 0m2.396s 00:06:08.355 sys 0m0.139s 00:06:08.355 04:21:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.355 04:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.355 04:21:11 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:08.355 04:21:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:08.355 04:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.355 04:21:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.355 ************************************ 00:06:08.356 START TEST accel_xor 00:06:08.356 ************************************ 00:06:08.356 
04:21:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:08.356 04:21:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.356 04:21:11 -- accel/accel.sh@17 -- # local accel_module 00:06:08.356 04:21:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:08.356 04:21:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:08.356 04:21:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.356 04:21:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.356 04:21:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.356 04:21:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.356 04:21:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.356 04:21:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.356 04:21:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.356 04:21:11 -- accel/accel.sh@42 -- # jq -r . 00:06:08.356 [2024-12-07 04:21:11.365925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.356 [2024-12-07 04:21:11.366248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56582 ] 00:06:08.356 [2024-12-07 04:21:11.493535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.356 [2024-12-07 04:21:11.541336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.734 04:21:12 -- accel/accel.sh@18 -- # out=' 00:06:09.734 SPDK Configuration: 00:06:09.734 Core mask: 0x1 00:06:09.734 00:06:09.734 Accel Perf Configuration: 00:06:09.734 Workload Type: xor 00:06:09.734 Source buffers: 3 00:06:09.734 Transfer size: 4096 bytes 00:06:09.734 Vector count 1 00:06:09.734 Module: software 00:06:09.734 Queue depth: 32 00:06:09.734 Allocate depth: 32 00:06:09.734 # threads/core: 1 00:06:09.734 Run time: 1 seconds 00:06:09.734 Verify: Yes 00:06:09.734 00:06:09.734 Running for 1 seconds... 00:06:09.734 00:06:09.734 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.734 ------------------------------------------------------------------------------------ 00:06:09.734 0,0 265376/s 1036 MiB/s 0 0 00:06:09.734 ==================================================================================== 00:06:09.734 Total 265376/s 1036 MiB/s 0 0' 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:09.734 04:21:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:09.734 04:21:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.734 04:21:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.734 04:21:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.734 04:21:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.734 04:21:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.734 04:21:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.734 04:21:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.734 04:21:12 -- accel/accel.sh@42 -- # jq -r . 00:06:09.734 [2024-12-07 04:21:12.717420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
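With three source buffers the software xor path does more work per operation, and the rate drops accordingly, from 282464/s to 265376/s (roughly 1103 MiB/s down to 1036 MiB/s). Reproduction sketch using the -x flag from the command line echoed above:
# Sketch only: xor across three source buffers.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3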
00:06:09.734 [2024-12-07 04:21:12.717509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56596 ] 00:06:09.734 [2024-12-07 04:21:12.851593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.734 [2024-12-07 04:21:12.899211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=0x1 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=xor 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=3 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=software 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=32 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=32 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=1 00:06:09.734 04:21:12 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val=Yes 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:09.734 04:21:12 -- accel/accel.sh@21 -- # val= 00:06:09.734 04:21:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # IFS=: 00:06:09.734 04:21:12 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.111 04:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.111 04:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.111 04:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.111 04:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.111 04:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@21 -- # val= 00:06:11.111 04:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # IFS=: 00:06:11.111 04:21:14 -- accel/accel.sh@20 -- # read -r var val 00:06:11.111 04:21:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.111 04:21:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:11.111 04:21:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.111 00:06:11.111 real 0m2.728s 00:06:11.111 user 0m2.383s 00:06:11.111 sys 0m0.139s 00:06:11.111 04:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.111 04:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:11.111 ************************************ 00:06:11.111 END TEST accel_xor 00:06:11.111 ************************************ 00:06:11.111 04:21:14 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:11.111 04:21:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:11.111 04:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.111 04:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:11.111 ************************************ 00:06:11.111 START TEST accel_dif_verify 00:06:11.111 ************************************ 
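The xor case above reports 265376 transfers/s and 1036 MiB/s for the software module. That bandwidth figure is consistent with transfers per second multiplied by the 4096-byte transfer size shown in the configuration block; the arithmetic below is only an illustrative check against the numbers in this log, not output from the test harness.

    # 265376 transfers/s * 4096 B per transfer, converted to MiB/s
    echo $(( 265376 * 4096 / 1024 / 1024 ))    # prints 1036, matching the xor "Total" row above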
00:06:11.111 04:21:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:11.111 04:21:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.111 04:21:14 -- accel/accel.sh@17 -- # local accel_module 00:06:11.111 04:21:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:11.111 04:21:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:11.111 04:21:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.111 04:21:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.111 04:21:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.111 04:21:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.111 04:21:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.111 04:21:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.111 04:21:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.111 04:21:14 -- accel/accel.sh@42 -- # jq -r . 00:06:11.111 [2024-12-07 04:21:14.141288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.111 [2024-12-07 04:21:14.141376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56630 ] 00:06:11.111 [2024-12-07 04:21:14.273480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.111 [2024-12-07 04:21:14.324509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.484 04:21:15 -- accel/accel.sh@18 -- # out=' 00:06:12.484 SPDK Configuration: 00:06:12.485 Core mask: 0x1 00:06:12.485 00:06:12.485 Accel Perf Configuration: 00:06:12.485 Workload Type: dif_verify 00:06:12.485 Vector size: 4096 bytes 00:06:12.485 Transfer size: 4096 bytes 00:06:12.485 Block size: 512 bytes 00:06:12.485 Metadata size: 8 bytes 00:06:12.485 Vector count 1 00:06:12.485 Module: software 00:06:12.485 Queue depth: 32 00:06:12.485 Allocate depth: 32 00:06:12.485 # threads/core: 1 00:06:12.485 Run time: 1 seconds 00:06:12.485 Verify: No 00:06:12.485 00:06:12.485 Running for 1 seconds... 00:06:12.485 00:06:12.485 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.485 ------------------------------------------------------------------------------------ 00:06:12.485 0,0 118656/s 470 MiB/s 0 0 00:06:12.485 ==================================================================================== 00:06:12.485 Total 118656/s 463 MiB/s 0 0' 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:12.485 04:21:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:12.485 04:21:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.485 04:21:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.485 04:21:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.485 04:21:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.485 04:21:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.485 04:21:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.485 04:21:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.485 04:21:15 -- accel/accel.sh@42 -- # jq -r . 00:06:12.485 [2024-12-07 04:21:15.503416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
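The dif_verify configuration above (4096-byte transfers, 512-byte block size, 8 bytes of metadata) implies eight protected blocks per transfer, so roughly 64 bytes of DIF metadata are checked for every 4 KiB buffer. The line below just spells out that arithmetic with the sizes from the table; it is not produced by the harness.

    # protected blocks per transfer * DIF metadata bytes per block
    echo $(( 4096 / 512 * 8 ))    # prints 64 (bytes of metadata per 4096-byte transfer)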
00:06:12.485 [2024-12-07 04:21:15.503680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56650 ] 00:06:12.485 [2024-12-07 04:21:15.639212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.485 [2024-12-07 04:21:15.686034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val=0x1 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val=dif_verify 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.485 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.485 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.485 04:21:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.742 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.742 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.742 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.742 04:21:15 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:12.742 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.742 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.742 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.742 04:21:15 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val=software 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 
-- # val=32 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val=32 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val=1 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val=No 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.743 04:21:15 -- accel/accel.sh@21 -- # val= 00:06:12.743 04:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.743 04:21:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 04:21:16 -- accel/accel.sh@21 -- # val= 00:06:13.678 04:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 04:21:16 -- accel/accel.sh@21 -- # val= 00:06:13.678 04:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 04:21:16 -- accel/accel.sh@21 -- # val= 00:06:13.678 04:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 04:21:16 -- accel/accel.sh@21 -- # val= 00:06:13.678 04:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 04:21:16 -- accel/accel.sh@21 -- # val= 00:06:13.678 04:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 ************************************ 00:06:13.678 END TEST accel_dif_verify 00:06:13.678 ************************************ 00:06:13.678 04:21:16 -- accel/accel.sh@21 -- # val= 00:06:13.678 04:21:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.678 04:21:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.678 04:21:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.678 04:21:16 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:13.678 04:21:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.678 00:06:13.678 real 0m2.721s 00:06:13.678 user 0m2.371s 00:06:13.678 sys 0m0.150s 00:06:13.678 04:21:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.678 
04:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:13.678 04:21:16 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:13.678 04:21:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:13.678 04:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.678 04:21:16 -- common/autotest_common.sh@10 -- # set +x 00:06:13.678 ************************************ 00:06:13.678 START TEST accel_dif_generate 00:06:13.678 ************************************ 00:06:13.678 04:21:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:13.678 04:21:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.678 04:21:16 -- accel/accel.sh@17 -- # local accel_module 00:06:13.678 04:21:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:13.678 04:21:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:13.678 04:21:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.678 04:21:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.678 04:21:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.678 04:21:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.678 04:21:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.678 04:21:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.678 04:21:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.678 04:21:16 -- accel/accel.sh@42 -- # jq -r . 00:06:13.678 [2024-12-07 04:21:16.916503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.678 [2024-12-07 04:21:16.916591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56679 ] 00:06:13.937 [2024-12-07 04:21:17.050416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.937 [2024-12-07 04:21:17.098177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.309 04:21:18 -- accel/accel.sh@18 -- # out=' 00:06:15.309 SPDK Configuration: 00:06:15.309 Core mask: 0x1 00:06:15.309 00:06:15.309 Accel Perf Configuration: 00:06:15.309 Workload Type: dif_generate 00:06:15.309 Vector size: 4096 bytes 00:06:15.309 Transfer size: 4096 bytes 00:06:15.309 Block size: 512 bytes 00:06:15.309 Metadata size: 8 bytes 00:06:15.309 Vector count 1 00:06:15.309 Module: software 00:06:15.309 Queue depth: 32 00:06:15.309 Allocate depth: 32 00:06:15.309 # threads/core: 1 00:06:15.309 Run time: 1 seconds 00:06:15.309 Verify: No 00:06:15.309 00:06:15.309 Running for 1 seconds... 
00:06:15.309 00:06:15.309 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.309 ------------------------------------------------------------------------------------ 00:06:15.309 0,0 143680/s 570 MiB/s 0 0 00:06:15.309 ==================================================================================== 00:06:15.309 Total 143680/s 561 MiB/s 0 0' 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:15.309 04:21:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:15.309 04:21:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.309 04:21:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.309 04:21:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.309 04:21:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.309 04:21:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.309 04:21:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.309 04:21:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.309 04:21:18 -- accel/accel.sh@42 -- # jq -r . 00:06:15.309 [2024-12-07 04:21:18.281881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.309 [2024-12-07 04:21:18.282167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56698 ] 00:06:15.309 [2024-12-07 04:21:18.411529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.309 [2024-12-07 04:21:18.459999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val=0x1 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val=dif_generate 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 
00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val=software 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val=32 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.309 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.309 04:21:18 -- accel/accel.sh@21 -- # val=32 00:06:15.309 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.310 04:21:18 -- accel/accel.sh@21 -- # val=1 00:06:15.310 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.310 04:21:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.310 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.310 04:21:18 -- accel/accel.sh@21 -- # val=No 00:06:15.310 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.310 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.310 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.310 04:21:18 -- accel/accel.sh@21 -- # val= 00:06:15.310 04:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.310 04:21:18 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.717 04:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.717 04:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.717 04:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.717 04:21:19 -- 
accel/accel.sh@20 -- # IFS=: 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.717 04:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.717 04:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@21 -- # val= 00:06:16.717 04:21:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.717 04:21:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.717 04:21:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.717 04:21:19 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:16.717 04:21:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.717 00:06:16.717 real 0m2.722s 00:06:16.717 user 0m2.387s 00:06:16.717 sys 0m0.137s 00:06:16.717 04:21:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.717 ************************************ 00:06:16.717 END TEST accel_dif_generate 00:06:16.717 ************************************ 00:06:16.717 04:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.718 04:21:19 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:16.718 04:21:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:16.718 04:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.718 04:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.718 ************************************ 00:06:16.718 START TEST accel_dif_generate_copy 00:06:16.718 ************************************ 00:06:16.718 04:21:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:16.718 04:21:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.718 04:21:19 -- accel/accel.sh@17 -- # local accel_module 00:06:16.718 04:21:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:16.718 04:21:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:16.718 04:21:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.718 04:21:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.718 04:21:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.718 04:21:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.718 04:21:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.718 04:21:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.718 04:21:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.718 04:21:19 -- accel/accel.sh@42 -- # jq -r . 00:06:16.718 [2024-12-07 04:21:19.687709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
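Each of these cases is driven by the same accel_perf invocation pattern visible in the trace, with only the -w workload changing. To reproduce a single case outside the CI wrapper, an invocation along the following lines should be close; this is a sketch that assumes the same SPDK build tree as this job, and it drops the -c /dev/fd/62 argument, which is only how the wrapper pipes in its (here empty) accel JSON configuration.

    # Hand-run the dif_generate case from this log against the job's SPDK build
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate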
00:06:16.718 [2024-12-07 04:21:19.687940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56733 ] 00:06:16.718 [2024-12-07 04:21:19.824010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.718 [2024-12-07 04:21:19.871146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.093 04:21:21 -- accel/accel.sh@18 -- # out=' 00:06:18.093 SPDK Configuration: 00:06:18.093 Core mask: 0x1 00:06:18.093 00:06:18.093 Accel Perf Configuration: 00:06:18.093 Workload Type: dif_generate_copy 00:06:18.093 Vector size: 4096 bytes 00:06:18.093 Transfer size: 4096 bytes 00:06:18.093 Vector count 1 00:06:18.093 Module: software 00:06:18.093 Queue depth: 32 00:06:18.093 Allocate depth: 32 00:06:18.093 # threads/core: 1 00:06:18.093 Run time: 1 seconds 00:06:18.093 Verify: No 00:06:18.093 00:06:18.093 Running for 1 seconds... 00:06:18.093 00:06:18.093 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.093 ------------------------------------------------------------------------------------ 00:06:18.093 0,0 109472/s 434 MiB/s 0 0 00:06:18.093 ==================================================================================== 00:06:18.093 Total 109472/s 427 MiB/s 0 0' 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.093 04:21:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:18.093 04:21:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:18.093 04:21:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.093 04:21:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.093 04:21:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.093 04:21:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.093 04:21:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.093 04:21:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.093 04:21:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.093 04:21:21 -- accel/accel.sh@42 -- # jq -r . 00:06:18.093 [2024-12-07 04:21:21.049160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
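The three DIF-style workloads in this log all use 4096-byte transfers on the software module, so their "Total" rows compare directly: dif_generate is fastest in this run, then dif_verify, then dif_generate_copy. The same transfers-times-transfer-size conversion used earlier reproduces the reported totals; again, this is an illustrative check rather than harness output.

    # dif_generate, dif_verify and dif_generate_copy totals, in MiB/s
    echo $(( 143680 * 4096 / 1024 / 1024 )) \
         $(( 118656 * 4096 / 1024 / 1024 )) \
         $(( 109472 * 4096 / 1024 / 1024 ))    # prints 561 463 427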
00:06:18.093 [2024-12-07 04:21:21.049249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56747 ] 00:06:18.093 [2024-12-07 04:21:21.183271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.093 [2024-12-07 04:21:21.230014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.093 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.093 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.093 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.093 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.093 04:21:21 -- accel/accel.sh@21 -- # val=0x1 00:06:18.093 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.093 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.093 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.093 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.093 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val=software 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val=32 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val=32 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 
-- # val=1 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val=No 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.094 04:21:21 -- accel/accel.sh@21 -- # val= 00:06:18.094 04:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # IFS=: 00:06:18.094 04:21:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.471 04:21:22 -- accel/accel.sh@21 -- # val= 00:06:19.471 04:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.471 04:21:22 -- accel/accel.sh@21 -- # val= 00:06:19.471 04:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.471 04:21:22 -- accel/accel.sh@21 -- # val= 00:06:19.471 04:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.471 04:21:22 -- accel/accel.sh@21 -- # val= 00:06:19.471 04:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.471 04:21:22 -- accel/accel.sh@21 -- # val= 00:06:19.471 04:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.471 04:21:22 -- accel/accel.sh@21 -- # val= 00:06:19.471 04:21:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.471 04:21:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.472 04:21:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.472 04:21:22 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:19.472 04:21:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.472 00:06:19.472 real 0m2.722s 00:06:19.472 user 0m2.381s 00:06:19.472 sys 0m0.141s 00:06:19.472 ************************************ 00:06:19.472 END TEST accel_dif_generate_copy 00:06:19.472 ************************************ 00:06:19.472 04:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.472 04:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.472 04:21:22 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:19.472 04:21:22 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.472 04:21:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:19.472 04:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.472 04:21:22 -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.472 ************************************ 00:06:19.472 START TEST accel_comp 00:06:19.472 ************************************ 00:06:19.472 04:21:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.472 04:21:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.472 04:21:22 -- accel/accel.sh@17 -- # local accel_module 00:06:19.472 04:21:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.472 04:21:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.472 04:21:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.472 04:21:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.472 04:21:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.472 04:21:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.472 04:21:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.472 04:21:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.472 04:21:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.472 04:21:22 -- accel/accel.sh@42 -- # jq -r . 00:06:19.472 [2024-12-07 04:21:22.452081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.472 [2024-12-07 04:21:22.452162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56781 ] 00:06:19.472 [2024-12-07 04:21:22.580896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.472 [2024-12-07 04:21:22.628340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.847 04:21:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:20.847 00:06:20.847 SPDK Configuration: 00:06:20.847 Core mask: 0x1 00:06:20.847 00:06:20.847 Accel Perf Configuration: 00:06:20.847 Workload Type: compress 00:06:20.847 Transfer size: 4096 bytes 00:06:20.847 Vector count 1 00:06:20.847 Module: software 00:06:20.847 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.847 Queue depth: 32 00:06:20.847 Allocate depth: 32 00:06:20.847 # threads/core: 1 00:06:20.847 Run time: 1 seconds 00:06:20.847 Verify: No 00:06:20.847 00:06:20.847 Running for 1 seconds... 
00:06:20.847 00:06:20.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.847 ------------------------------------------------------------------------------------ 00:06:20.847 0,0 56000/s 233 MiB/s 0 0 00:06:20.847 ==================================================================================== 00:06:20.847 Total 56000/s 218 MiB/s 0 0' 00:06:20.847 04:21:23 -- accel/accel.sh@20 -- # IFS=: 00:06:20.847 04:21:23 -- accel/accel.sh@20 -- # read -r var val 00:06:20.847 04:21:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.847 04:21:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.847 04:21:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.847 04:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.847 04:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.847 04:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.847 04:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.847 04:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.847 04:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.847 04:21:23 -- accel/accel.sh@42 -- # jq -r . 00:06:20.847 [2024-12-07 04:21:23.804357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.847 [2024-12-07 04:21:23.804445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56801 ] 00:06:20.847 [2024-12-07 04:21:23.938631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.847 [2024-12-07 04:21:23.985758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.847 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.847 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.847 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.847 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=0x1 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=compress 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 
00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=software 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=32 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=32 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=1 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val=No 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.848 04:21:24 -- accel/accel.sh@21 -- # val= 00:06:20.848 04:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.848 04:21:24 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@21 -- # val= 00:06:22.224 04:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@21 -- # val= 00:06:22.224 04:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@21 -- # val= 00:06:22.224 04:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@21 -- # val= 
00:06:22.224 04:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@21 -- # val= 00:06:22.224 04:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@21 -- # val= 00:06:22.224 04:21:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # IFS=: 00:06:22.224 04:21:25 -- accel/accel.sh@20 -- # read -r var val 00:06:22.224 04:21:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.224 04:21:25 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:22.224 04:21:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.224 00:06:22.224 real 0m2.715s 00:06:22.224 user 0m2.379s 00:06:22.224 sys 0m0.134s 00:06:22.224 04:21:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.224 04:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:22.224 ************************************ 00:06:22.224 END TEST accel_comp 00:06:22.224 ************************************ 00:06:22.224 04:21:25 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.224 04:21:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:22.224 04:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.224 04:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:22.224 ************************************ 00:06:22.224 START TEST accel_decomp 00:06:22.224 ************************************ 00:06:22.224 04:21:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.224 04:21:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.224 04:21:25 -- accel/accel.sh@17 -- # local accel_module 00:06:22.224 04:21:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.224 04:21:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.224 04:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.224 04:21:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.224 04:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.224 04:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.224 04:21:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.224 04:21:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.224 04:21:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.224 04:21:25 -- accel/accel.sh@42 -- # jq -r . 00:06:22.224 [2024-12-07 04:21:25.220484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.224 [2024-12-07 04:21:25.220584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56830 ] 00:06:22.224 [2024-12-07 04:21:25.354870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.224 [2024-12-07 04:21:25.402328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.630 04:21:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:23.630 00:06:23.630 SPDK Configuration: 00:06:23.630 Core mask: 0x1 00:06:23.630 00:06:23.630 Accel Perf Configuration: 00:06:23.630 Workload Type: decompress 00:06:23.630 Transfer size: 4096 bytes 00:06:23.630 Vector count 1 00:06:23.630 Module: software 00:06:23.630 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.630 Queue depth: 32 00:06:23.630 Allocate depth: 32 00:06:23.630 # threads/core: 1 00:06:23.630 Run time: 1 seconds 00:06:23.630 Verify: Yes 00:06:23.630 00:06:23.630 Running for 1 seconds... 00:06:23.630 00:06:23.630 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.630 ------------------------------------------------------------------------------------ 00:06:23.630 0,0 76576/s 141 MiB/s 0 0 00:06:23.630 ==================================================================================== 00:06:23.630 Total 76576/s 299 MiB/s 0 0' 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.630 04:21:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:23.630 04:21:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.630 04:21:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.630 04:21:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.630 04:21:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.630 04:21:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.630 04:21:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.630 04:21:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.630 04:21:26 -- accel/accel.sh@42 -- # jq -r . 00:06:23.630 [2024-12-07 04:21:26.577730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.630 [2024-12-07 04:21:26.577815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56846 ] 00:06:23.630 [2024-12-07 04:21:26.715869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.630 [2024-12-07 04:21:26.768381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val=0x1 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val=decompress 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.630 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.630 04:21:26 -- accel/accel.sh@21 -- # val=software 00:06:23.630 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.630 04:21:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val=32 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- 
accel/accel.sh@21 -- # val=32 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val=1 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val=Yes 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.631 04:21:26 -- accel/accel.sh@21 -- # val= 00:06:23.631 04:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.631 04:21:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.005 04:21:27 -- accel/accel.sh@21 -- # val= 00:06:25.005 04:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:25.005 04:21:27 -- accel/accel.sh@21 -- # val= 00:06:25.005 04:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:25.005 04:21:27 -- accel/accel.sh@21 -- # val= 00:06:25.005 04:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:25.005 04:21:27 -- accel/accel.sh@21 -- # val= 00:06:25.005 04:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.005 04:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:25.006 04:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:25.006 04:21:27 -- accel/accel.sh@21 -- # val= 00:06:25.006 ************************************ 00:06:25.006 END TEST accel_decomp 00:06:25.006 ************************************ 00:06:25.006 04:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.006 04:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:25.006 04:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:25.006 04:21:27 -- accel/accel.sh@21 -- # val= 00:06:25.006 04:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.006 04:21:27 -- accel/accel.sh@20 -- # IFS=: 00:06:25.006 04:21:27 -- accel/accel.sh@20 -- # read -r var val 00:06:25.006 04:21:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.006 04:21:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:25.006 04:21:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.006 00:06:25.006 real 0m2.729s 00:06:25.006 user 0m2.393s 00:06:25.006 sys 0m0.133s 00:06:25.006 04:21:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.006 04:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.006 04:21:27 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
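With compress and decompress finished as well, the easiest way to compare software-module throughput across all of these workloads is to pull the "Workload Type" and "Total" lines out of a saved copy of this console output. The snippet below is a hypothetical post-processing step, not something this job runs, and the build.log filename is an assumption.

    # List each workload together with its Total transfers/bandwidth row
    grep -E 'Workload Type:|Total [0-9]+/s' build.log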
00:06:25.006 04:21:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:25.006 04:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.006 04:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.006 ************************************ 00:06:25.006 START TEST accel_decmop_full 00:06:25.006 ************************************ 00:06:25.006 04:21:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.006 04:21:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.006 04:21:27 -- accel/accel.sh@17 -- # local accel_module 00:06:25.006 04:21:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.006 04:21:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:25.006 04:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.006 04:21:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.006 04:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.006 04:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.006 04:21:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.006 04:21:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.006 04:21:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.006 04:21:27 -- accel/accel.sh@42 -- # jq -r . 00:06:25.006 [2024-12-07 04:21:28.007903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.006 [2024-12-07 04:21:28.008130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56884 ] 00:06:25.006 [2024-12-07 04:21:28.144611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.006 [2024-12-07 04:21:28.192252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.382 04:21:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:26.382 00:06:26.382 SPDK Configuration: 00:06:26.382 Core mask: 0x1 00:06:26.382 00:06:26.382 Accel Perf Configuration: 00:06:26.382 Workload Type: decompress 00:06:26.382 Transfer size: 111250 bytes 00:06:26.382 Vector count 1 00:06:26.382 Module: software 00:06:26.382 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.382 Queue depth: 32 00:06:26.382 Allocate depth: 32 00:06:26.382 # threads/core: 1 00:06:26.382 Run time: 1 seconds 00:06:26.382 Verify: Yes 00:06:26.382 00:06:26.382 Running for 1 seconds... 
00:06:26.382 00:06:26.382 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.382 ------------------------------------------------------------------------------------ 00:06:26.382 0,0 5280/s 218 MiB/s 0 0 00:06:26.382 ==================================================================================== 00:06:26.382 Total 5280/s 560 MiB/s 0 0' 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:26.382 04:21:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.382 04:21:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.382 04:21:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.382 04:21:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.382 04:21:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.382 04:21:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.382 04:21:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.382 04:21:29 -- accel/accel.sh@42 -- # jq -r . 00:06:26.382 [2024-12-07 04:21:29.375606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.382 [2024-12-07 04:21:29.375918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56898 ] 00:06:26.382 [2024-12-07 04:21:29.507528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.382 [2024-12-07 04:21:29.554598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=0x1 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=decompress 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:26.382 04:21:29 -- accel/accel.sh@20 
-- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=software 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=32 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=32 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=1 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val=Yes 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:26.382 04:21:29 -- accel/accel.sh@21 -- # val= 00:06:26.382 04:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # IFS=: 00:06:26.382 04:21:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@21 -- # val= 00:06:27.759 04:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@21 -- # val= 00:06:27.759 04:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@21 -- # val= 00:06:27.759 04:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@21 -- # 
val= 00:06:27.759 04:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.759 ************************************ 00:06:27.759 END TEST accel_decmop_full 00:06:27.759 ************************************ 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@21 -- # val= 00:06:27.759 04:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@21 -- # val= 00:06:27.759 04:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.759 04:21:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.759 04:21:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.759 04:21:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:27.759 04:21:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.759 00:06:27.759 real 0m2.739s 00:06:27.759 user 0m2.396s 00:06:27.759 sys 0m0.142s 00:06:27.759 04:21:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.759 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.759 04:21:30 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.759 04:21:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:27.759 04:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.759 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.759 ************************************ 00:06:27.759 START TEST accel_decomp_mcore 00:06:27.759 ************************************ 00:06:27.759 04:21:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.759 04:21:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.759 04:21:30 -- accel/accel.sh@17 -- # local accel_module 00:06:27.759 04:21:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.759 04:21:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.759 04:21:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.759 04:21:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.759 04:21:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.759 04:21:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.759 04:21:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.759 04:21:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.759 04:21:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.759 04:21:30 -- accel/accel.sh@42 -- # jq -r . 00:06:27.759 [2024-12-07 04:21:30.797253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
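The accel_decomp_mcore run that starts above differs from the previous single-core runs only in the core mask: the wrapper adds -m 0xf, which shows up in the EAL parameters as -c 0xf and produces the four reactor threads reported below. A sketch of that variant, under the same assumptions as the earlier snippet:

    # Multi-core variant: same software decompress workload, pinned to cores 0-3.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -m 0xf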
00:06:27.759 [2024-12-07 04:21:30.797497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56937 ] 00:06:27.759 [2024-12-07 04:21:30.931140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.759 [2024-12-07 04:21:30.986880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.759 [2024-12-07 04:21:30.986971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.759 [2024-12-07 04:21:30.987104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.759 [2024-12-07 04:21:30.987107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.134 04:21:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:29.134 00:06:29.134 SPDK Configuration: 00:06:29.134 Core mask: 0xf 00:06:29.134 00:06:29.134 Accel Perf Configuration: 00:06:29.134 Workload Type: decompress 00:06:29.134 Transfer size: 4096 bytes 00:06:29.134 Vector count 1 00:06:29.134 Module: software 00:06:29.134 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.134 Queue depth: 32 00:06:29.134 Allocate depth: 32 00:06:29.134 # threads/core: 1 00:06:29.134 Run time: 1 seconds 00:06:29.134 Verify: Yes 00:06:29.134 00:06:29.134 Running for 1 seconds... 00:06:29.134 00:06:29.134 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.134 ------------------------------------------------------------------------------------ 00:06:29.134 0,0 65664/s 121 MiB/s 0 0 00:06:29.134 3,0 63072/s 116 MiB/s 0 0 00:06:29.134 2,0 60832/s 112 MiB/s 0 0 00:06:29.134 1,0 63008/s 116 MiB/s 0 0 00:06:29.134 ==================================================================================== 00:06:29.134 Total 252576/s 986 MiB/s 0 0' 00:06:29.134 04:21:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:29.134 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.134 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.134 04:21:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:29.134 04:21:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.134 04:21:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.134 04:21:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.134 04:21:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.134 04:21:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.134 04:21:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.134 04:21:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.134 04:21:32 -- accel/accel.sh@42 -- # jq -r . 00:06:29.134 [2024-12-07 04:21:32.166382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.134 [2024-12-07 04:21:32.166466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56955 ] 00:06:29.134 [2024-12-07 04:21:32.294471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.134 [2024-12-07 04:21:32.345193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.134 [2024-12-07 04:21:32.345311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.134 [2024-12-07 04:21:32.345450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.134 [2024-12-07 04:21:32.345453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=0xf 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=decompress 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=software 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 
00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=32 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=32 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=1 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val=Yes 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.394 04:21:32 -- accel/accel.sh@21 -- # val= 00:06:29.394 04:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.394 04:21:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 ************************************ 00:06:30.331 END TEST accel_decomp_mcore 00:06:30.331 ************************************ 
00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@21 -- # val= 00:06:30.331 04:21:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # IFS=: 00:06:30.331 04:21:33 -- accel/accel.sh@20 -- # read -r var val 00:06:30.331 04:21:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.331 04:21:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:30.331 04:21:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.331 00:06:30.331 real 0m2.739s 00:06:30.331 user 0m8.819s 00:06:30.331 sys 0m0.151s 00:06:30.331 04:21:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.331 04:21:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.331 04:21:33 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.331 04:21:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:30.331 04:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.331 04:21:33 -- common/autotest_common.sh@10 -- # set +x 00:06:30.331 ************************************ 00:06:30.331 START TEST accel_decomp_full_mcore 00:06:30.331 ************************************ 00:06:30.331 04:21:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.331 04:21:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.331 04:21:33 -- accel/accel.sh@17 -- # local accel_module 00:06:30.589 04:21:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.589 04:21:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.589 04:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.589 04:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.589 04:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.589 04:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.589 04:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.589 04:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.589 04:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.589 04:21:33 -- accel/accel.sh@42 -- # jq -r . 00:06:30.590 [2024-12-07 04:21:33.592723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.590 [2024-12-07 04:21:33.592969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56987 ] 00:06:30.590 [2024-12-07 04:21:33.730048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.590 [2024-12-07 04:21:33.799450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.590 [2024-12-07 04:21:33.799550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.590 [2024-12-07 04:21:33.799669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.590 [2024-12-07 04:21:33.799674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.966 04:21:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:31.966 00:06:31.966 SPDK Configuration: 00:06:31.966 Core mask: 0xf 00:06:31.966 00:06:31.966 Accel Perf Configuration: 00:06:31.966 Workload Type: decompress 00:06:31.966 Transfer size: 111250 bytes 00:06:31.966 Vector count 1 00:06:31.966 Module: software 00:06:31.966 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.966 Queue depth: 32 00:06:31.966 Allocate depth: 32 00:06:31.966 # threads/core: 1 00:06:31.966 Run time: 1 seconds 00:06:31.966 Verify: Yes 00:06:31.966 00:06:31.967 Running for 1 seconds... 00:06:31.967 00:06:31.967 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.967 ------------------------------------------------------------------------------------ 00:06:31.967 0,0 4864/s 200 MiB/s 0 0 00:06:31.967 3,0 4864/s 200 MiB/s 0 0 00:06:31.967 2,0 4864/s 200 MiB/s 0 0 00:06:31.967 1,0 4928/s 203 MiB/s 0 0 00:06:31.967 ==================================================================================== 00:06:31.967 Total 19520/s 2070 MiB/s 0 0' 00:06:31.967 04:21:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.967 04:21:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.967 04:21:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.967 04:21:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.967 04:21:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.967 04:21:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.967 04:21:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.967 04:21:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.967 04:21:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.967 04:21:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.967 04:21:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.967 04:21:34 -- accel/accel.sh@42 -- # jq -r . 00:06:31.967 [2024-12-07 04:21:34.992909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
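In the summaries above, the "full" variants (accel_decmop_full and accel_decomp_full_mcore) are the ones the wrapper launches with -o 0, and their transfer size grows from 4096 to 111250 bytes. When comparing such runs it can help to pull out just the aggregate line; the sketch below assumes accel_perf output was captured to a plain file (named accel_perf.out here for illustration) without the CI timestamps shown in this log:

    # Print transfers/s and aggregate bandwidth from a captured accel_perf run.
    # accel_perf.out is a hypothetical capture, e.g. accel_perf ... > accel_perf.out
    awk '/^Total/ {print "transfers:", $2, "bandwidth:", $3, $4}' accel_perf.out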
00:06:31.967 [2024-12-07 04:21:34.993149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57010 ] 00:06:31.967 [2024-12-07 04:21:35.131993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.967 [2024-12-07 04:21:35.185535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.967 [2024-12-07 04:21:35.185629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.967 [2024-12-07 04:21:35.185729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.967 [2024-12-07 04:21:35.185731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=0xf 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=decompress 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=software 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 
00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=32 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=32 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=1 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val=Yes 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.227 04:21:35 -- accel/accel.sh@21 -- # val= 00:06:32.227 04:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # IFS=: 00:06:32.227 04:21:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.161 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.161 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.161 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.161 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.161 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.161 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.161 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.161 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.161 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- 
accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@21 -- # val= 00:06:33.162 ************************************ 00:06:33.162 END TEST accel_decomp_full_mcore 00:06:33.162 ************************************ 00:06:33.162 04:21:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.162 04:21:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.162 04:21:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.162 04:21:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:33.162 04:21:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.162 00:06:33.162 real 0m2.800s 00:06:33.162 user 0m8.918s 00:06:33.162 sys 0m0.164s 00:06:33.162 04:21:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.162 04:21:36 -- common/autotest_common.sh@10 -- # set +x 00:06:33.421 04:21:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.421 04:21:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:33.421 04:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.421 04:21:36 -- common/autotest_common.sh@10 -- # set +x 00:06:33.421 ************************************ 00:06:33.421 START TEST accel_decomp_mthread 00:06:33.421 ************************************ 00:06:33.421 04:21:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.421 04:21:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.421 04:21:36 -- accel/accel.sh@17 -- # local accel_module 00:06:33.421 04:21:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.421 04:21:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.421 04:21:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.421 04:21:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.421 04:21:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.421 04:21:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.421 04:21:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.421 04:21:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.421 04:21:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.421 04:21:36 -- accel/accel.sh@42 -- # jq -r . 00:06:33.421 [2024-12-07 04:21:36.444284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.421 [2024-12-07 04:21:36.444374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57047 ] 00:06:33.421 [2024-12-07 04:21:36.579613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.421 [2024-12-07 04:21:36.626797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.801 04:21:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:34.801 00:06:34.801 SPDK Configuration: 00:06:34.801 Core mask: 0x1 00:06:34.801 00:06:34.801 Accel Perf Configuration: 00:06:34.801 Workload Type: decompress 00:06:34.801 Transfer size: 4096 bytes 00:06:34.801 Vector count 1 00:06:34.801 Module: software 00:06:34.801 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.801 Queue depth: 32 00:06:34.801 Allocate depth: 32 00:06:34.801 # threads/core: 2 00:06:34.801 Run time: 1 seconds 00:06:34.801 Verify: Yes 00:06:34.801 00:06:34.801 Running for 1 seconds... 00:06:34.801 00:06:34.801 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.801 ------------------------------------------------------------------------------------ 00:06:34.801 0,1 40288/s 74 MiB/s 0 0 00:06:34.801 0,0 40128/s 73 MiB/s 0 0 00:06:34.801 ==================================================================================== 00:06:34.801 Total 80416/s 314 MiB/s 0 0' 00:06:34.801 04:21:37 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:37 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:34.801 04:21:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.801 04:21:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:34.801 04:21:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.801 04:21:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.801 04:21:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.801 04:21:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.801 04:21:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.801 04:21:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.801 04:21:37 -- accel/accel.sh@42 -- # jq -r . 00:06:34.801 [2024-12-07 04:21:37.803434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
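The accel_decomp_mthread results above come from adding -T 2 to the single-core invocation, which is why the summary reports "# threads/core: 2" and two 0,N result rows. A sketch of that invocation, with the same caveats as before:

    # Two worker threads on core 0 (-T 2), software decompress, 1-second run.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2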
00:06:34.801 [2024-12-07 04:21:37.804188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57066 ] 00:06:34.801 [2024-12-07 04:21:37.931975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.801 [2024-12-07 04:21:37.977935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val=0x1 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val=decompress 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.801 04:21:38 -- accel/accel.sh@21 -- # val=software 00:06:34.801 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.801 04:21:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.801 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val=32 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- 
accel/accel.sh@21 -- # val=32 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val=2 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val=Yes 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:34.802 04:21:38 -- accel/accel.sh@21 -- # val= 00:06:34.802 04:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # IFS=: 00:06:34.802 04:21:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@21 -- # val= 00:06:36.181 04:21:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # IFS=: 00:06:36.181 04:21:39 -- accel/accel.sh@20 -- # read -r var val 00:06:36.181 04:21:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.181 04:21:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:36.181 04:21:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.181 00:06:36.181 real 0m2.716s 00:06:36.181 user 0m2.375s 00:06:36.181 sys 0m0.140s 00:06:36.181 04:21:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.181 04:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.181 ************************************ 00:06:36.181 END 
TEST accel_decomp_mthread 00:06:36.181 ************************************ 00:06:36.181 04:21:39 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.181 04:21:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:36.181 04:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.181 04:21:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.181 ************************************ 00:06:36.181 START TEST accel_deomp_full_mthread 00:06:36.181 ************************************ 00:06:36.181 04:21:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.181 04:21:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.181 04:21:39 -- accel/accel.sh@17 -- # local accel_module 00:06:36.181 04:21:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.181 04:21:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.181 04:21:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.181 04:21:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.181 04:21:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.181 04:21:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.181 04:21:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.181 04:21:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.181 04:21:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.181 04:21:39 -- accel/accel.sh@42 -- # jq -r . 00:06:36.181 [2024-12-07 04:21:39.210453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.181 [2024-12-07 04:21:39.210538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57098 ] 00:06:36.181 [2024-12-07 04:21:39.347769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.181 [2024-12-07 04:21:39.398832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.559 04:21:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:37.559 00:06:37.559 SPDK Configuration: 00:06:37.559 Core mask: 0x1 00:06:37.559 00:06:37.559 Accel Perf Configuration: 00:06:37.559 Workload Type: decompress 00:06:37.559 Transfer size: 111250 bytes 00:06:37.559 Vector count 1 00:06:37.559 Module: software 00:06:37.559 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.559 Queue depth: 32 00:06:37.559 Allocate depth: 32 00:06:37.559 # threads/core: 2 00:06:37.559 Run time: 1 seconds 00:06:37.559 Verify: Yes 00:06:37.559 00:06:37.559 Running for 1 seconds... 
00:06:37.559 00:06:37.559 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.559 ------------------------------------------------------------------------------------ 00:06:37.559 0,1 2720/s 112 MiB/s 0 0 00:06:37.559 0,0 2720/s 112 MiB/s 0 0 00:06:37.559 ==================================================================================== 00:06:37.559 Total 5440/s 577 MiB/s 0 0' 00:06:37.559 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.559 04:21:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.559 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.559 04:21:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.559 04:21:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.559 04:21:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.559 04:21:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.559 04:21:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.559 04:21:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.559 04:21:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.559 04:21:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.559 04:21:40 -- accel/accel.sh@42 -- # jq -r . 00:06:37.559 [2024-12-07 04:21:40.601113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.559 [2024-12-07 04:21:40.601615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57118 ] 00:06:37.559 [2024-12-07 04:21:40.735062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.559 [2024-12-07 04:21:40.782265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=0x1 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=decompress 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=software 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=32 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=32 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=2 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val=Yes 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.818 04:21:40 -- accel/accel.sh@21 -- # val= 00:06:37.818 04:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.818 04:21:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # 
read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@21 -- # val= 00:06:38.753 04:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.753 04:21:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.753 04:21:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.753 04:21:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:38.753 04:21:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.753 00:06:38.753 real 0m2.776s 00:06:38.753 user 0m2.433s 00:06:38.753 sys 0m0.144s 00:06:38.753 04:21:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.753 04:21:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.753 ************************************ 00:06:38.753 END TEST accel_deomp_full_mthread 00:06:38.753 ************************************ 00:06:39.011 04:21:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:39.011 04:21:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.011 04:21:42 -- accel/accel.sh@129 -- # build_accel_config 00:06:39.011 04:21:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.011 04:21:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:39.011 04:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.011 04:21:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.011 04:21:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.011 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.011 04:21:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.011 04:21:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.011 04:21:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.011 04:21:42 -- accel/accel.sh@42 -- # jq -r . 00:06:39.011 ************************************ 00:06:39.011 START TEST accel_dif_functional_tests 00:06:39.011 ************************************ 00:06:39.011 04:21:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.011 [2024-12-07 04:21:42.067147] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
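For reference, the decompress/mthread numbers above come from a single accel_perf invocation; the following is a minimal sketch of running it by hand. The binary path, input file and flags are copied from the trace in this log; the per-flag readings in the comments are my interpretation rather than anything the log states, and the harness additionally feeds a JSON accel config in over /dev/fd/62, which this sketch omits.

    # Sketch: re-run the two-thread software decompress case outside the test harness.
    # -t 1 : run for one second   -w decompress : workload   -y : verify the output
    # -l   : compressed input file used by the test          -T 2 : two worker threads
    # -o 0 : transfer-size argument, left as in the original command line
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2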
00:06:39.011 [2024-12-07 04:21:42.067409] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57154 ] 00:06:39.011 [2024-12-07 04:21:42.202811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.270 [2024-12-07 04:21:42.251259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.270 [2024-12-07 04:21:42.251448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.270 [2024-12-07 04:21:42.251454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.270 00:06:39.270 00:06:39.270 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.270 http://cunit.sourceforge.net/ 00:06:39.270 00:06:39.270 00:06:39.270 Suite: accel_dif 00:06:39.270 Test: verify: DIF generated, GUARD check ...passed 00:06:39.270 Test: verify: DIF generated, APPTAG check ...passed 00:06:39.270 Test: verify: DIF generated, REFTAG check ...passed 00:06:39.270 Test: verify: DIF not generated, GUARD check ...passed 00:06:39.270 Test: verify: DIF not generated, APPTAG check ...[2024-12-07 04:21:42.299107] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:39.270 [2024-12-07 04:21:42.299210] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:39.270 [2024-12-07 04:21:42.299248] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:39.270 passed 00:06:39.270 Test: verify: DIF not generated, REFTAG check ...[2024-12-07 04:21:42.299294] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:39.270 passed 00:06:39.270 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:39.270 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-07 04:21:42.299324] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:39.270 [2024-12-07 04:21:42.299533] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:39.270 [2024-12-07 04:21:42.299599] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:39.270 passed 00:06:39.270 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:39.271 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:39.271 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:39.271 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:39.271 Test: generate copy: DIF generated, GUARD check ...passed 00:06:39.271 Test: generate copy: DIF generated, APTTAG check ...[2024-12-07 04:21:42.299832] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:39.271 passed 00:06:39.271 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:39.271 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:39.271 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:39.271 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:39.271 Test: generate copy: iovecs-len validate ...passed 00:06:39.271 Test: generate copy: buffer alignment validate ...[2024-12-07 04:21:42.300377] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:39.271 passed 00:06:39.271 00:06:39.271 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.271 suites 1 1 n/a 0 0 00:06:39.271 tests 20 20 20 0 0 00:06:39.271 asserts 204 204 204 0 n/a 00:06:39.271 00:06:39.271 Elapsed time = 0.003 seconds 00:06:39.271 00:06:39.271 real 0m0.447s 00:06:39.271 user 0m0.510s 00:06:39.271 sys 0m0.095s 00:06:39.271 04:21:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.271 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.271 ************************************ 00:06:39.271 END TEST accel_dif_functional_tests 00:06:39.271 ************************************ 00:06:39.271 00:06:39.271 real 0m58.848s 00:06:39.271 user 1m4.064s 00:06:39.271 sys 0m4.203s 00:06:39.271 04:21:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.271 ************************************ 00:06:39.271 END TEST accel 00:06:39.271 ************************************ 00:06:39.271 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.529 04:21:42 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:39.529 04:21:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.529 04:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.529 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.529 ************************************ 00:06:39.529 START TEST accel_rpc 00:06:39.529 ************************************ 00:06:39.529 04:21:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:39.529 * Looking for test storage... 00:06:39.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:39.529 04:21:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:39.529 04:21:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:39.529 04:21:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:39.529 04:21:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:39.529 04:21:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:39.529 04:21:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:39.529 04:21:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:39.529 04:21:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:39.530 04:21:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:39.530 04:21:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.530 04:21:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:39.530 04:21:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:39.530 04:21:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:39.530 04:21:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:39.530 04:21:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:39.530 04:21:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:39.530 04:21:42 -- scripts/common.sh@344 -- # : 1 00:06:39.530 04:21:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:39.530 04:21:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.530 04:21:42 -- scripts/common.sh@364 -- # decimal 1 00:06:39.530 04:21:42 -- scripts/common.sh@352 -- # local d=1 00:06:39.530 04:21:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.530 04:21:42 -- scripts/common.sh@354 -- # echo 1 00:06:39.530 04:21:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:39.530 04:21:42 -- scripts/common.sh@365 -- # decimal 2 00:06:39.530 04:21:42 -- scripts/common.sh@352 -- # local d=2 00:06:39.530 04:21:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.530 04:21:42 -- scripts/common.sh@354 -- # echo 2 00:06:39.530 04:21:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:39.530 04:21:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:39.530 04:21:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:39.530 04:21:42 -- scripts/common.sh@367 -- # return 0 00:06:39.530 04:21:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.530 04:21:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.530 --rc genhtml_branch_coverage=1 00:06:39.530 --rc genhtml_function_coverage=1 00:06:39.530 --rc genhtml_legend=1 00:06:39.530 --rc geninfo_all_blocks=1 00:06:39.530 --rc geninfo_unexecuted_blocks=1 00:06:39.530 00:06:39.530 ' 00:06:39.530 04:21:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.530 --rc genhtml_branch_coverage=1 00:06:39.530 --rc genhtml_function_coverage=1 00:06:39.530 --rc genhtml_legend=1 00:06:39.530 --rc geninfo_all_blocks=1 00:06:39.530 --rc geninfo_unexecuted_blocks=1 00:06:39.530 00:06:39.530 ' 00:06:39.530 04:21:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.530 --rc genhtml_branch_coverage=1 00:06:39.530 --rc genhtml_function_coverage=1 00:06:39.530 --rc genhtml_legend=1 00:06:39.530 --rc geninfo_all_blocks=1 00:06:39.530 --rc geninfo_unexecuted_blocks=1 00:06:39.530 00:06:39.530 ' 00:06:39.530 04:21:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.530 --rc genhtml_branch_coverage=1 00:06:39.530 --rc genhtml_function_coverage=1 00:06:39.530 --rc genhtml_legend=1 00:06:39.530 --rc geninfo_all_blocks=1 00:06:39.530 --rc geninfo_unexecuted_blocks=1 00:06:39.530 00:06:39.530 ' 00:06:39.530 04:21:42 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.530 04:21:42 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57225 00:06:39.530 04:21:42 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:39.530 04:21:42 -- accel/accel_rpc.sh@15 -- # waitforlisten 57225 00:06:39.530 04:21:42 -- common/autotest_common.sh@829 -- # '[' -z 57225 ']' 00:06:39.530 04:21:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.530 04:21:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.530 04:21:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.530 04:21:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.530 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.788 [2024-12-07 04:21:42.790979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.788 [2024-12-07 04:21:42.791339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57225 ] 00:06:39.788 [2024-12-07 04:21:42.925118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.788 [2024-12-07 04:21:42.978091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.788 [2024-12-07 04:21:42.978274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.046 04:21:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.046 04:21:43 -- common/autotest_common.sh@862 -- # return 0 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:40.046 04:21:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.046 04:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.046 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.046 ************************************ 00:06:40.046 START TEST accel_assign_opcode 00:06:40.046 ************************************ 00:06:40.046 04:21:43 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:40.046 04:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.046 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.046 [2024-12-07 04:21:43.054697] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:40.046 04:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:40.046 04:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.046 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.046 [2024-12-07 04:21:43.062711] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:40.046 04:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:40.046 04:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.046 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.046 04:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:40.046 04:21:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.046 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.046 04:21:43 -- accel/accel_rpc.sh@42 -- # grep software 00:06:40.046 04:21:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.046 software 00:06:40.046 
************************************ 00:06:40.046 END TEST accel_assign_opcode 00:06:40.046 ************************************ 00:06:40.046 00:06:40.046 real 0m0.194s 00:06:40.046 user 0m0.059s 00:06:40.046 sys 0m0.008s 00:06:40.046 04:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.046 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.304 04:21:43 -- accel/accel_rpc.sh@55 -- # killprocess 57225 00:06:40.304 04:21:43 -- common/autotest_common.sh@936 -- # '[' -z 57225 ']' 00:06:40.304 04:21:43 -- common/autotest_common.sh@940 -- # kill -0 57225 00:06:40.304 04:21:43 -- common/autotest_common.sh@941 -- # uname 00:06:40.304 04:21:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.305 04:21:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57225 00:06:40.305 killing process with pid 57225 00:06:40.305 04:21:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.305 04:21:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.305 04:21:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57225' 00:06:40.305 04:21:43 -- common/autotest_common.sh@955 -- # kill 57225 00:06:40.305 04:21:43 -- common/autotest_common.sh@960 -- # wait 57225 00:06:40.563 ************************************ 00:06:40.563 END TEST accel_rpc 00:06:40.563 ************************************ 00:06:40.563 00:06:40.563 real 0m1.056s 00:06:40.563 user 0m1.083s 00:06:40.563 sys 0m0.311s 00:06:40.563 04:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.563 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.563 04:21:43 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.563 04:21:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.563 04:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.563 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.563 ************************************ 00:06:40.563 START TEST app_cmdline 00:06:40.563 ************************************ 00:06:40.563 04:21:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:40.563 * Looking for test storage... 
00:06:40.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.563 04:21:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.563 04:21:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.563 04:21:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.822 04:21:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.822 04:21:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.822 04:21:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.822 04:21:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.822 04:21:43 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.822 04:21:43 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.822 04:21:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.822 04:21:43 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.822 04:21:43 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.822 04:21:43 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.822 04:21:43 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.822 04:21:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.822 04:21:43 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.822 04:21:43 -- scripts/common.sh@344 -- # : 1 00:06:40.822 04:21:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.822 04:21:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.822 04:21:43 -- scripts/common.sh@364 -- # decimal 1 00:06:40.822 04:21:43 -- scripts/common.sh@352 -- # local d=1 00:06:40.822 04:21:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.822 04:21:43 -- scripts/common.sh@354 -- # echo 1 00:06:40.822 04:21:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.822 04:21:43 -- scripts/common.sh@365 -- # decimal 2 00:06:40.822 04:21:43 -- scripts/common.sh@352 -- # local d=2 00:06:40.822 04:21:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.822 04:21:43 -- scripts/common.sh@354 -- # echo 2 00:06:40.822 04:21:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.823 04:21:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.823 04:21:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.823 04:21:43 -- scripts/common.sh@367 -- # return 0 00:06:40.823 04:21:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.823 04:21:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.823 --rc genhtml_branch_coverage=1 00:06:40.823 --rc genhtml_function_coverage=1 00:06:40.823 --rc genhtml_legend=1 00:06:40.823 --rc geninfo_all_blocks=1 00:06:40.823 --rc geninfo_unexecuted_blocks=1 00:06:40.823 00:06:40.823 ' 00:06:40.823 04:21:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.823 --rc genhtml_branch_coverage=1 00:06:40.823 --rc genhtml_function_coverage=1 00:06:40.823 --rc genhtml_legend=1 00:06:40.823 --rc geninfo_all_blocks=1 00:06:40.823 --rc geninfo_unexecuted_blocks=1 00:06:40.823 00:06:40.823 ' 00:06:40.823 04:21:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.823 --rc genhtml_branch_coverage=1 00:06:40.823 --rc genhtml_function_coverage=1 00:06:40.823 --rc genhtml_legend=1 00:06:40.823 --rc geninfo_all_blocks=1 00:06:40.823 --rc geninfo_unexecuted_blocks=1 00:06:40.823 00:06:40.823 ' 00:06:40.823 04:21:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.823 --rc genhtml_branch_coverage=1 00:06:40.823 --rc genhtml_function_coverage=1 00:06:40.823 --rc genhtml_legend=1 00:06:40.823 --rc geninfo_all_blocks=1 00:06:40.823 --rc geninfo_unexecuted_blocks=1 00:06:40.823 00:06:40.823 ' 00:06:40.823 04:21:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.823 04:21:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57312 00:06:40.823 04:21:43 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.823 04:21:43 -- app/cmdline.sh@18 -- # waitforlisten 57312 00:06:40.823 04:21:43 -- common/autotest_common.sh@829 -- # '[' -z 57312 ']' 00:06:40.823 04:21:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.823 04:21:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.823 04:21:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.823 04:21:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.823 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.823 [2024-12-07 04:21:43.885622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.823 [2024-12-07 04:21:43.885747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57312 ] 00:06:40.823 [2024-12-07 04:21:44.017589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.082 [2024-12-07 04:21:44.070019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.082 [2024-12-07 04:21:44.070215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.650 04:21:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.650 04:21:44 -- common/autotest_common.sh@862 -- # return 0 00:06:41.650 04:21:44 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:41.909 { 00:06:41.909 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:41.909 "fields": { 00:06:41.909 "major": 24, 00:06:41.909 "minor": 1, 00:06:41.909 "patch": 1, 00:06:41.909 "suffix": "-pre", 00:06:41.909 "commit": "c13c99a5e" 00:06:41.909 } 00:06:41.909 } 00:06:41.909 04:21:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.909 04:21:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.909 04:21:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.909 04:21:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.909 04:21:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.909 04:21:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.909 04:21:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.909 04:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:41.909 04:21:45 -- app/cmdline.sh@26 -- # sort 00:06:41.909 04:21:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.909 04:21:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.909 04:21:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.909 04:21:45 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.909 04:21:45 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.909 04:21:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.909 04:21:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.909 04:21:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.909 04:21:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.909 04:21:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.909 04:21:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.909 04:21:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.909 04:21:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:41.909 04:21:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:41.909 04:21:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.167 request: 00:06:42.167 { 00:06:42.167 "method": "env_dpdk_get_mem_stats", 00:06:42.167 "req_id": 1 00:06:42.167 } 00:06:42.167 Got JSON-RPC error response 00:06:42.167 response: 00:06:42.167 { 00:06:42.167 "code": -32601, 00:06:42.167 "message": "Method not found" 00:06:42.167 } 00:06:42.425 04:21:45 -- common/autotest_common.sh@653 -- # es=1 00:06:42.425 04:21:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.425 04:21:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.425 04:21:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.425 04:21:45 -- app/cmdline.sh@1 -- # killprocess 57312 00:06:42.426 04:21:45 -- common/autotest_common.sh@936 -- # '[' -z 57312 ']' 00:06:42.426 04:21:45 -- common/autotest_common.sh@940 -- # kill -0 57312 00:06:42.426 04:21:45 -- common/autotest_common.sh@941 -- # uname 00:06:42.426 04:21:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.426 04:21:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57312 00:06:42.426 04:21:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.426 killing process with pid 57312 00:06:42.426 04:21:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.426 04:21:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57312' 00:06:42.426 04:21:45 -- common/autotest_common.sh@955 -- # kill 57312 00:06:42.426 04:21:45 -- common/autotest_common.sh@960 -- # wait 57312 00:06:42.684 00:06:42.684 real 0m2.076s 00:06:42.684 user 0m2.694s 00:06:42.684 sys 0m0.377s 00:06:42.684 ************************************ 00:06:42.684 END TEST app_cmdline 00:06:42.684 ************************************ 00:06:42.684 04:21:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.684 04:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.684 04:21:45 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:42.684 04:21:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.684 04:21:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.684 04:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:42.684 
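The app_cmdline run above boils down to a few RPC calls against a target started with an RPC allow-list: permitted methods answer normally, anything else comes back as JSON-RPC error -32601 instead of being executed. A minimal sketch of the same sequence, with paths taken from this log and the waitforlisten helper replaced by a plain sleep:

    # Start the target allowing only the two RPCs the test needs.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 2   # the real test waits on the RPC socket via waitforlisten
    "$SPDK/scripts/rpc.py" rpc_get_methods          # allowed: lists exactly the two permitted methods
    "$SPDK/scripts/rpc.py" spdk_get_version         # allowed: prints the version JSON seen above
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # filtered out: JSON-RPC -32601, "Method not found"
    kill "$tgt_pid"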
************************************ 00:06:42.684 START TEST version 00:06:42.684 ************************************ 00:06:42.684 04:21:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:42.684 * Looking for test storage... 00:06:42.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.684 04:21:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:42.684 04:21:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:42.684 04:21:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:42.942 04:21:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:42.942 04:21:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:42.942 04:21:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:42.942 04:21:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:42.942 04:21:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:42.942 04:21:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:42.942 04:21:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.942 04:21:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:42.942 04:21:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:42.942 04:21:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:42.942 04:21:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:42.942 04:21:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:42.942 04:21:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:42.942 04:21:45 -- scripts/common.sh@344 -- # : 1 00:06:42.942 04:21:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:42.942 04:21:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.942 04:21:45 -- scripts/common.sh@364 -- # decimal 1 00:06:42.942 04:21:45 -- scripts/common.sh@352 -- # local d=1 00:06:42.942 04:21:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.942 04:21:45 -- scripts/common.sh@354 -- # echo 1 00:06:42.942 04:21:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:42.942 04:21:45 -- scripts/common.sh@365 -- # decimal 2 00:06:42.942 04:21:45 -- scripts/common.sh@352 -- # local d=2 00:06:42.942 04:21:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.942 04:21:45 -- scripts/common.sh@354 -- # echo 2 00:06:42.942 04:21:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:42.942 04:21:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:42.942 04:21:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:42.942 04:21:45 -- scripts/common.sh@367 -- # return 0 00:06:42.942 04:21:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.942 04:21:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.942 --rc genhtml_branch_coverage=1 00:06:42.942 --rc genhtml_function_coverage=1 00:06:42.942 --rc genhtml_legend=1 00:06:42.942 --rc geninfo_all_blocks=1 00:06:42.942 --rc geninfo_unexecuted_blocks=1 00:06:42.942 00:06:42.942 ' 00:06:42.942 04:21:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.942 --rc genhtml_branch_coverage=1 00:06:42.942 --rc genhtml_function_coverage=1 00:06:42.942 --rc genhtml_legend=1 00:06:42.942 --rc geninfo_all_blocks=1 00:06:42.942 --rc geninfo_unexecuted_blocks=1 00:06:42.942 00:06:42.942 ' 00:06:42.942 04:21:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:42.942 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:42.942 --rc genhtml_branch_coverage=1 00:06:42.942 --rc genhtml_function_coverage=1 00:06:42.942 --rc genhtml_legend=1 00:06:42.942 --rc geninfo_all_blocks=1 00:06:42.942 --rc geninfo_unexecuted_blocks=1 00:06:42.942 00:06:42.942 ' 00:06:42.942 04:21:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:42.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.942 --rc genhtml_branch_coverage=1 00:06:42.942 --rc genhtml_function_coverage=1 00:06:42.942 --rc genhtml_legend=1 00:06:42.942 --rc geninfo_all_blocks=1 00:06:42.942 --rc geninfo_unexecuted_blocks=1 00:06:42.943 00:06:42.943 ' 00:06:42.943 04:21:45 -- app/version.sh@17 -- # get_header_version major 00:06:42.943 04:21:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.943 04:21:45 -- app/version.sh@14 -- # tr -d '"' 00:06:42.943 04:21:45 -- app/version.sh@14 -- # cut -f2 00:06:42.943 04:21:45 -- app/version.sh@17 -- # major=24 00:06:42.943 04:21:45 -- app/version.sh@18 -- # get_header_version minor 00:06:42.943 04:21:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.943 04:21:45 -- app/version.sh@14 -- # cut -f2 00:06:42.943 04:21:45 -- app/version.sh@14 -- # tr -d '"' 00:06:42.943 04:21:45 -- app/version.sh@18 -- # minor=1 00:06:42.943 04:21:45 -- app/version.sh@19 -- # get_header_version patch 00:06:42.943 04:21:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.943 04:21:45 -- app/version.sh@14 -- # cut -f2 00:06:42.943 04:21:45 -- app/version.sh@14 -- # tr -d '"' 00:06:42.943 04:21:46 -- app/version.sh@19 -- # patch=1 00:06:42.943 04:21:46 -- app/version.sh@20 -- # get_header_version suffix 00:06:42.943 04:21:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.943 04:21:46 -- app/version.sh@14 -- # cut -f2 00:06:42.943 04:21:46 -- app/version.sh@14 -- # tr -d '"' 00:06:42.943 04:21:46 -- app/version.sh@20 -- # suffix=-pre 00:06:42.943 04:21:46 -- app/version.sh@22 -- # version=24.1 00:06:42.943 04:21:46 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:42.943 04:21:46 -- app/version.sh@25 -- # version=24.1.1 00:06:42.943 04:21:46 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:42.943 04:21:46 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:42.943 04:21:46 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:42.943 04:21:46 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:42.943 04:21:46 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:42.943 00:06:42.943 real 0m0.256s 00:06:42.943 user 0m0.178s 00:06:42.943 sys 0m0.116s 00:06:42.943 04:21:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.943 ************************************ 00:06:42.943 END TEST version 00:06:42.943 ************************************ 00:06:42.943 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.943 04:21:46 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:42.943 04:21:46 -- spdk/autotest.sh@191 -- # uname -s 00:06:42.943 04:21:46 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
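The version test above pulls each field straight out of include/spdk/version.h with the same grep/cut/tr pipeline the trace shows; condensed into one block it looks roughly like this. The header path comes from this log, and the final "-pre" to "rc0" mapping mirrors what the trace records (version=24.1.1rc0) before the result is compared against python3 -c 'import spdk; print(spdk.__version__)'.

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=${major}.${minor}                           # 24.1
    (( patch != 0 )) && version=${version}.${patch}     # 24.1.1
    echo "${version}${suffix/-pre/rc0}"                 # 24.1.1rc0, matched against the Python package version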
00:06:42.943 04:21:46 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:42.943 04:21:46 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:06:42.943 04:21:46 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:06:42.943 04:21:46 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:42.943 04:21:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.943 04:21:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.943 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.943 ************************************ 00:06:42.943 START TEST spdk_dd 00:06:42.943 ************************************ 00:06:42.943 04:21:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:43.201 * Looking for test storage... 00:06:43.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.201 04:21:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.201 04:21:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.201 04:21:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:43.201 04:21:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:43.202 04:21:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:43.202 04:21:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:43.202 04:21:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:43.202 04:21:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:43.202 04:21:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:43.202 04:21:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.202 04:21:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:43.202 04:21:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:43.202 04:21:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:43.202 04:21:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:43.202 04:21:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:43.202 04:21:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:43.202 04:21:46 -- scripts/common.sh@344 -- # : 1 00:06:43.202 04:21:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:43.202 04:21:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.202 04:21:46 -- scripts/common.sh@364 -- # decimal 1 00:06:43.202 04:21:46 -- scripts/common.sh@352 -- # local d=1 00:06:43.202 04:21:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.202 04:21:46 -- scripts/common.sh@354 -- # echo 1 00:06:43.202 04:21:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.202 04:21:46 -- scripts/common.sh@365 -- # decimal 2 00:06:43.202 04:21:46 -- scripts/common.sh@352 -- # local d=2 00:06:43.202 04:21:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.202 04:21:46 -- scripts/common.sh@354 -- # echo 2 00:06:43.202 04:21:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.202 04:21:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.202 04:21:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.202 04:21:46 -- scripts/common.sh@367 -- # return 0 00:06:43.202 04:21:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.202 04:21:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.202 --rc genhtml_branch_coverage=1 00:06:43.202 --rc genhtml_function_coverage=1 00:06:43.202 --rc genhtml_legend=1 00:06:43.202 --rc geninfo_all_blocks=1 00:06:43.202 --rc geninfo_unexecuted_blocks=1 00:06:43.202 00:06:43.202 ' 00:06:43.202 04:21:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.202 --rc genhtml_branch_coverage=1 00:06:43.202 --rc genhtml_function_coverage=1 00:06:43.202 --rc genhtml_legend=1 00:06:43.202 --rc geninfo_all_blocks=1 00:06:43.202 --rc geninfo_unexecuted_blocks=1 00:06:43.202 00:06:43.202 ' 00:06:43.202 04:21:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.202 --rc genhtml_branch_coverage=1 00:06:43.202 --rc genhtml_function_coverage=1 00:06:43.202 --rc genhtml_legend=1 00:06:43.202 --rc geninfo_all_blocks=1 00:06:43.202 --rc geninfo_unexecuted_blocks=1 00:06:43.202 00:06:43.202 ' 00:06:43.202 04:21:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.202 --rc genhtml_branch_coverage=1 00:06:43.202 --rc genhtml_function_coverage=1 00:06:43.202 --rc genhtml_legend=1 00:06:43.202 --rc geninfo_all_blocks=1 00:06:43.202 --rc geninfo_unexecuted_blocks=1 00:06:43.202 00:06:43.202 ' 00:06:43.202 04:21:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.202 04:21:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.202 04:21:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.202 04:21:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.202 04:21:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.202 04:21:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.202 04:21:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.202 04:21:46 -- paths/export.sh@5 -- # export PATH 00:06:43.202 04:21:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.202 04:21:46 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.460 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:43.460 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:43.460 04:21:46 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:43.460 04:21:46 -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:43.460 04:21:46 -- scripts/common.sh@311 -- # local bdf bdfs 00:06:43.460 04:21:46 -- scripts/common.sh@312 -- # local nvmes 00:06:43.460 04:21:46 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:06:43.460 04:21:46 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:43.460 04:21:46 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:06:43.460 04:21:46 -- scripts/common.sh@297 -- # local bdf= 00:06:43.460 04:21:46 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:06:43.460 04:21:46 -- scripts/common.sh@232 -- # local class 00:06:43.460 04:21:46 -- scripts/common.sh@233 -- # local subclass 00:06:43.460 04:21:46 -- scripts/common.sh@234 -- # local progif 00:06:43.460 04:21:46 -- scripts/common.sh@235 -- # printf %02x 1 00:06:43.718 04:21:46 -- scripts/common.sh@235 -- # class=01 00:06:43.718 04:21:46 -- scripts/common.sh@236 -- # printf %02x 8 00:06:43.718 04:21:46 -- scripts/common.sh@236 -- # subclass=08 00:06:43.718 04:21:46 -- scripts/common.sh@237 -- # printf %02x 2 00:06:43.718 04:21:46 -- scripts/common.sh@237 -- # progif=02 00:06:43.718 04:21:46 -- scripts/common.sh@239 -- # hash lspci 00:06:43.718 04:21:46 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:06:43.718 04:21:46 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:06:43.718 04:21:46 -- scripts/common.sh@242 -- # grep -i -- -p02 00:06:43.718 04:21:46 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:43.718 04:21:46 -- scripts/common.sh@244 -- # tr -d '"' 00:06:43.718 04:21:46 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:43.718 04:21:46 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:06:43.719 04:21:46 -- scripts/common.sh@15 -- # local i 00:06:43.719 04:21:46 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:06:43.719 04:21:46 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:43.719 04:21:46 -- scripts/common.sh@24 -- # return 0 00:06:43.719 04:21:46 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:06:43.719 04:21:46 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:43.719 04:21:46 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:06:43.719 04:21:46 -- scripts/common.sh@15 -- # local i 00:06:43.719 04:21:46 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:06:43.719 04:21:46 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:43.719 04:21:46 -- scripts/common.sh@24 -- # return 0 00:06:43.719 04:21:46 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:06:43.719 04:21:46 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:43.719 04:21:46 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:06:43.719 04:21:46 -- scripts/common.sh@322 -- # uname -s 00:06:43.719 04:21:46 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:43.719 04:21:46 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:43.719 04:21:46 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:43.719 04:21:46 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:06:43.719 04:21:46 -- scripts/common.sh@322 -- # uname -s 00:06:43.719 04:21:46 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:43.719 04:21:46 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:43.719 04:21:46 -- scripts/common.sh@327 -- # (( 2 )) 00:06:43.719 04:21:46 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:43.719 04:21:46 -- dd/dd.sh@13 -- # check_liburing 00:06:43.719 04:21:46 -- dd/common.sh@139 -- # local lib so 00:06:43.719 04:21:46 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:43.719 04:21:46 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:06:43.719 
04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:06:43.719 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.719 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_power.so.24 == 
liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:43.720 04:21:46 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:43.720 04:21:46 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:43.720 * spdk_dd linked to liburing 00:06:43.720 04:21:46 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:43.720 04:21:46 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:43.720 04:21:46 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:43.720 04:21:46 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:43.720 04:21:46 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:43.720 04:21:46 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:43.720 04:21:46 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:43.720 04:21:46 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:43.720 04:21:46 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:43.720 04:21:46 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:43.720 04:21:46 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:43.720 04:21:46 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:43.720 04:21:46 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:43.720 
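What the dd/common.sh@142-144 trace above is doing: it walks the shared-object dependencies of spdk_dd one entry at a time and tests each soname against liburing.so.*; the match on liburing.so.2 is what produces the "* spdk_dd linked to liburing" line. A minimal sketch of that loop, assuming the dependency list comes from ldd (the real helper may obtain and filter it differently):

    # Sketch only: detect whether spdk_dd is dynamically linked against liburing.
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    liburing_in_use=0
    while read -r lib _ so _; do          # ldd lines look like: "<soname> => <path> (<addr>)"
        if [[ $lib == liburing.so.* ]]; then
            printf '* spdk_dd linked to liburing\n'
            liburing_in_use=1
        fi
    done < <(ldd "$spdk_dd")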
04:21:46 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:43.720 04:21:46 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:43.720 04:21:46 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:43.720 04:21:46 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:43.720 04:21:46 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:43.720 04:21:46 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:43.720 04:21:46 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:43.720 04:21:46 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:43.720 04:21:46 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:43.720 04:21:46 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:43.720 04:21:46 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:43.720 04:21:46 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:43.720 04:21:46 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:43.720 04:21:46 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:43.720 04:21:46 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:43.720 04:21:46 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:43.720 04:21:46 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:43.720 04:21:46 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:43.720 04:21:46 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:43.720 04:21:46 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:43.720 04:21:46 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:43.720 04:21:46 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:43.720 04:21:46 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:43.720 04:21:46 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:43.720 04:21:46 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:43.720 04:21:46 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:43.720 04:21:46 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:43.720 04:21:46 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:43.720 04:21:46 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:43.720 04:21:46 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:43.720 04:21:46 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:43.720 04:21:46 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:43.720 04:21:46 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:43.720 04:21:46 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:43.720 04:21:46 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:43.720 04:21:46 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:43.720 04:21:46 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:06:43.720 04:21:46 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:43.720 04:21:46 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:43.720 04:21:46 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:43.720 04:21:46 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:43.720 04:21:46 -- 
common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:43.720 04:21:46 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:43.720 04:21:46 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:43.720 04:21:46 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:43.720 04:21:46 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:43.720 04:21:46 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:43.720 04:21:46 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:43.720 04:21:46 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:43.720 04:21:46 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:43.720 04:21:46 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:43.720 04:21:46 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:43.720 04:21:46 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:43.720 04:21:46 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:43.720 04:21:46 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:43.720 04:21:46 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:43.720 04:21:46 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:43.720 04:21:46 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:43.720 04:21:46 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:43.720 04:21:46 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:43.720 04:21:46 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:43.720 04:21:46 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:06:43.720 04:21:46 -- dd/common.sh@149 -- # [[ y != y ]] 00:06:43.720 04:21:46 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:43.720 04:21:46 -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:43.720 04:21:46 -- dd/common.sh@156 -- # liburing_in_use=1 00:06:43.720 04:21:46 -- dd/common.sh@157 -- # return 0 00:06:43.720 04:21:46 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:43.720 04:21:46 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:43.720 04:21:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:43.720 04:21:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.720 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:06:43.720 ************************************ 00:06:43.721 START TEST spdk_dd_basic_rw 00:06:43.721 ************************************ 00:06:43.721 04:21:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:43.721 * Looking for test storage... 
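With CONFIG_URING=y confirmed from build_config.sh and /usr/lib64/liburing.so.2 present on the host, dd/common.sh exports liburing_in_use=1, so the gate at dd/dd.sh@15 falls through and the basic_rw run proceeds. A hedged reconstruction of that gate (the variable names come from the trace; the error handling shown is illustrative, not the exact dd.sh code):

    # Sketch: bail out when uring coverage was requested but spdk_dd cannot use liburing.
    if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
        echo "SPDK_TEST_URING=1 but spdk_dd is not linked against liburing" >&2
        exit 1
    fi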
00:06:43.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.721 04:21:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.721 04:21:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.721 04:21:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:43.979 04:21:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:43.979 04:21:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:43.979 04:21:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:43.979 04:21:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:43.979 04:21:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:43.979 04:21:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:43.979 04:21:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.979 04:21:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:43.979 04:21:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:43.979 04:21:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:43.979 04:21:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:43.979 04:21:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:43.979 04:21:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:43.979 04:21:46 -- scripts/common.sh@344 -- # : 1 00:06:43.979 04:21:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:43.979 04:21:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.979 04:21:46 -- scripts/common.sh@364 -- # decimal 1 00:06:43.979 04:21:46 -- scripts/common.sh@352 -- # local d=1 00:06:43.979 04:21:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.979 04:21:46 -- scripts/common.sh@354 -- # echo 1 00:06:43.979 04:21:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.979 04:21:46 -- scripts/common.sh@365 -- # decimal 2 00:06:43.979 04:21:46 -- scripts/common.sh@352 -- # local d=2 00:06:43.979 04:21:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.979 04:21:46 -- scripts/common.sh@354 -- # echo 2 00:06:43.979 04:21:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.979 04:21:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.979 04:21:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.979 04:21:46 -- scripts/common.sh@367 -- # return 0 00:06:43.979 04:21:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.979 04:21:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.979 --rc genhtml_branch_coverage=1 00:06:43.979 --rc genhtml_function_coverage=1 00:06:43.979 --rc genhtml_legend=1 00:06:43.979 --rc geninfo_all_blocks=1 00:06:43.979 --rc geninfo_unexecuted_blocks=1 00:06:43.979 00:06:43.979 ' 00:06:43.979 04:21:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.979 --rc genhtml_branch_coverage=1 00:06:43.979 --rc genhtml_function_coverage=1 00:06:43.979 --rc genhtml_legend=1 00:06:43.979 --rc geninfo_all_blocks=1 00:06:43.979 --rc geninfo_unexecuted_blocks=1 00:06:43.979 00:06:43.979 ' 00:06:43.979 04:21:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.979 --rc genhtml_branch_coverage=1 00:06:43.979 --rc genhtml_function_coverage=1 00:06:43.979 --rc genhtml_legend=1 00:06:43.979 --rc geninfo_all_blocks=1 00:06:43.979 --rc geninfo_unexecuted_blocks=1 00:06:43.979 00:06:43.979 ' 00:06:43.979 04:21:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.979 --rc genhtml_branch_coverage=1 00:06:43.979 --rc genhtml_function_coverage=1 00:06:43.979 --rc genhtml_legend=1 00:06:43.979 --rc geninfo_all_blocks=1 00:06:43.980 --rc geninfo_unexecuted_blocks=1 00:06:43.980 00:06:43.980 ' 00:06:43.980 04:21:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.980 04:21:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.980 04:21:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.980 04:21:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.980 04:21:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.980 04:21:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.980 04:21:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.980 04:21:46 -- paths/export.sh@5 -- # export PATH 00:06:43.980 04:21:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.980 04:21:46 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:43.980 04:21:46 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:43.980 04:21:46 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:43.980 04:21:46 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:06:43.980 04:21:46 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:43.980 04:21:46 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:06:43.980 04:21:46 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:43.980 04:21:46 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.980 04:21:46 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.980 04:21:46 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:06:43.980 04:21:46 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:06:43.980 04:21:46 -- dd/common.sh@126 -- # mapfile -t id 00:06:43.980 04:21:46 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:06:43.980 04:21:47 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2188 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:43.980 04:21:47 -- dd/common.sh@130 -- # lbaf=04 00:06:43.981 04:21:47 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2188 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:43.981 04:21:47 -- dd/common.sh@132 -- # lbaf=4096 00:06:43.981 04:21:47 -- dd/common.sh@134 -- # echo 4096 00:06:43.981 04:21:47 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:43.981 04:21:47 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:43.981 04:21:47 -- dd/basic_rw.sh@96 -- # : 00:06:43.981 04:21:47 -- dd/basic_rw.sh@96 -- # gen_conf 00:06:43.981 04:21:47 -- common/autotest_common.sh@1087 -- # '[' 8 
-le 1 ']' 00:06:43.981 04:21:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.981 04:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.981 04:21:47 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.981 04:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.981 ************************************ 00:06:43.981 START TEST dd_bs_lt_native_bs 00:06:43.981 ************************************ 00:06:43.981 04:21:47 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:43.981 04:21:47 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.981 04:21:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:43.981 04:21:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.981 04:21:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.981 04:21:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.981 04:21:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.981 04:21:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.981 04:21:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.981 04:21:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.981 04:21:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.981 04:21:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:44.238 { 00:06:44.238 "subsystems": [ 00:06:44.238 { 00:06:44.238 "subsystem": "bdev", 00:06:44.238 "config": [ 00:06:44.238 { 00:06:44.238 "params": { 00:06:44.238 "trtype": "pcie", 00:06:44.238 "traddr": "0000:00:06.0", 00:06:44.238 "name": "Nvme0" 00:06:44.238 }, 00:06:44.238 "method": "bdev_nvme_attach_controller" 00:06:44.238 }, 00:06:44.238 { 00:06:44.238 "method": "bdev_wait_for_examine" 00:06:44.238 } 00:06:44.238 ] 00:06:44.238 } 00:06:44.238 ] 00:06:44.238 } 00:06:44.238 [2024-12-07 04:21:47.255722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
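get_native_nvme_bs (dd/common.sh@124-134, traced a few lines up) captures the spdk_nvme_identify output for the controller at 0000:00:06.0 into the id array and extracts the native block size with two regex matches: first the current LBA format index (#04 here), then that format's data size (4096 bytes). That 4096-byte value is what the dd_bs_lt_native_bs run starting here is tested against. A rough sketch of the same extraction using names from the trace; the exact patterns and quoting in dd/common.sh may differ:

    # Sketch: derive the native LBA data size of an NVMe controller.
    pci=0000:00:06.0
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    mapfile -t id < <("$identify" -r "trtype:pcie traddr:$pci")
    if [[ ${id[*]} =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]]; then
        lbaf=${BASH_REMATCH[1]}                                # e.g. 04
        if [[ ${id[*]} =~ LBA\ Format\ \#${lbaf}:\ Data\ Size:\ *([0-9]+) ]]; then
            echo "${BASH_REMATCH[1]}"                          # e.g. 4096
        fi
    fi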
00:06:44.238 [2024-12-07 04:21:47.255825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57662 ] 00:06:44.238 [2024-12-07 04:21:47.395860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.238 [2024-12-07 04:21:47.465004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.496 [2024-12-07 04:21:47.588836] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:44.496 [2024-12-07 04:21:47.588934] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.496 [2024-12-07 04:21:47.672369] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:44.753 04:21:47 -- common/autotest_common.sh@653 -- # es=234 00:06:44.753 04:21:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.753 04:21:47 -- common/autotest_common.sh@662 -- # es=106 00:06:44.753 ************************************ 00:06:44.753 END TEST dd_bs_lt_native_bs 00:06:44.753 ************************************ 00:06:44.753 04:21:47 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:44.753 04:21:47 -- common/autotest_common.sh@670 -- # es=1 00:06:44.753 04:21:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.753 00:06:44.753 real 0m0.590s 00:06:44.753 user 0m0.429s 00:06:44.753 sys 0m0.121s 00:06:44.753 04:21:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.753 04:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:44.753 04:21:47 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:44.753 04:21:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:44.753 04:21:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.753 04:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:44.753 ************************************ 00:06:44.753 START TEST dd_rw 00:06:44.753 ************************************ 00:06:44.753 04:21:47 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:06:44.753 04:21:47 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:44.753 04:21:47 -- dd/basic_rw.sh@12 -- # local count size 00:06:44.753 04:21:47 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:44.753 04:21:47 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:44.753 04:21:47 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:44.753 04:21:47 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:44.753 04:21:47 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:44.753 04:21:47 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:44.753 04:21:47 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:44.753 04:21:47 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:44.753 04:21:47 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:44.753 04:21:47 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:44.753 04:21:47 -- dd/basic_rw.sh@23 -- # count=15 00:06:44.753 04:21:47 -- dd/basic_rw.sh@24 -- # count=15 00:06:44.753 04:21:47 -- dd/basic_rw.sh@25 -- # size=61440 00:06:44.753 04:21:47 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:44.753 04:21:47 -- dd/common.sh@98 -- # xtrace_disable 00:06:44.753 04:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:45.321 04:21:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
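dd_bs_lt_native_bs, which finishes just above, is a negative test: spdk_dd is invoked with --bs=2048, smaller than the detected 4096-byte native block size, and the call is wrapped in NOT, so the test passes precisely because spdk_dd rejects the copy ("--bs value cannot be less than input (1) neither output (4096) native block size") and exits non-zero; the es=234 -> es=106 -> es=1 lines are that exit-status bookkeeping. The log then enters dd_rw, which writes and re-reads dd.dump0 for every block size and queue depth combination. A minimal sketch of a NOT-style wrapper (illustrative only; the real autotest_common.sh helper also validates its arguments):

    # Sketch: invert the exit status so an expected failure counts as a pass.
    NOT() {
        if "$@"; then
            return 1        # the wrapped command unexpectedly succeeded
        fi
        return 0            # non-zero exit is the expected outcome
    }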
00:06:45.321 04:21:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.321 04:21:48 -- dd/common.sh@31 -- # xtrace_disable 00:06:45.321 04:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.321 [2024-12-07 04:21:48.484342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.321 [2024-12-07 04:21:48.484775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57695 ] 00:06:45.321 { 00:06:45.321 "subsystems": [ 00:06:45.321 { 00:06:45.321 "subsystem": "bdev", 00:06:45.321 "config": [ 00:06:45.321 { 00:06:45.321 "params": { 00:06:45.321 "trtype": "pcie", 00:06:45.321 "traddr": "0000:00:06.0", 00:06:45.321 "name": "Nvme0" 00:06:45.321 }, 00:06:45.321 "method": "bdev_nvme_attach_controller" 00:06:45.321 }, 00:06:45.321 { 00:06:45.321 "method": "bdev_wait_for_examine" 00:06:45.321 } 00:06:45.321 ] 00:06:45.321 } 00:06:45.321 ] 00:06:45.321 } 00:06:45.580 [2024-12-07 04:21:48.623118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.580 [2024-12-07 04:21:48.674793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.580  [2024-12-07T04:21:49.079Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:45.839 00:06:45.839 04:21:48 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:45.839 04:21:48 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:45.839 04:21:48 -- dd/common.sh@31 -- # xtrace_disable 00:06:45.839 04:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.839 [2024-12-07 04:21:49.022337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.839 [2024-12-07 04:21:49.022419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57707 ] 00:06:45.839 { 00:06:45.839 "subsystems": [ 00:06:45.839 { 00:06:45.839 "subsystem": "bdev", 00:06:45.839 "config": [ 00:06:45.839 { 00:06:45.839 "params": { 00:06:45.839 "trtype": "pcie", 00:06:45.839 "traddr": "0000:00:06.0", 00:06:45.839 "name": "Nvme0" 00:06:45.839 }, 00:06:45.839 "method": "bdev_nvme_attach_controller" 00:06:45.839 }, 00:06:45.839 { 00:06:45.839 "method": "bdev_wait_for_examine" 00:06:45.839 } 00:06:45.839 ] 00:06:45.839 } 00:06:45.839 ] 00:06:45.839 } 00:06:46.110 [2024-12-07 04:21:49.150908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.110 [2024-12-07 04:21:49.199450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.110  [2024-12-07T04:21:49.653Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:46.413 00:06:46.413 04:21:49 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.413 04:21:49 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:46.413 04:21:49 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.413 04:21:49 -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.413 04:21:49 -- dd/common.sh@12 -- # local size=61440 00:06:46.413 04:21:49 -- dd/common.sh@14 -- # local bs=1048576 00:06:46.413 04:21:49 -- dd/common.sh@15 -- # local count=1 00:06:46.413 04:21:49 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.413 04:21:49 -- dd/common.sh@18 -- # gen_conf 00:06:46.413 04:21:49 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.413 04:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:46.413 [2024-12-07 04:21:49.552194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
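The block traced above is one complete dd_rw iteration for bs=4096, qd=1: write 15 blocks of dd.dump0 to the Nvme0n1 bdev, read the same 15 blocks back into dd.dump1, compare the two files with diff -q, then clear_nvme overwrites the written region from /dev/zero before the next iteration. A condensed sketch of that cycle with the paths and sizes from the trace; the bdev JSON configuration the real run passes via gen_conf and --json is omitted for brevity, so this is not a standalone reproduction:

    # Sketch of one dd_rw iteration (bs=4096, qd=1, count=15 -> 61440 bytes).
    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    "$dd_bin" --if="$dump0" --ob=Nvme0n1 --bs=4096 --qd=1                # write out
    "$dd_bin" --ib=Nvme0n1 --of="$dump1" --bs=4096 --qd=1 --count=15     # read back
    diff -q "$dump0" "$dump1"                                            # verify contents match
    "$dd_bin" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1         # clear_nvme: wipe 1 MiB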
00:06:46.413 [2024-12-07 04:21:49.552327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57721 ] 00:06:46.413 { 00:06:46.413 "subsystems": [ 00:06:46.413 { 00:06:46.413 "subsystem": "bdev", 00:06:46.413 "config": [ 00:06:46.413 { 00:06:46.413 "params": { 00:06:46.413 "trtype": "pcie", 00:06:46.413 "traddr": "0000:00:06.0", 00:06:46.413 "name": "Nvme0" 00:06:46.413 }, 00:06:46.413 "method": "bdev_nvme_attach_controller" 00:06:46.413 }, 00:06:46.413 { 00:06:46.413 "method": "bdev_wait_for_examine" 00:06:46.413 } 00:06:46.413 ] 00:06:46.413 } 00:06:46.413 ] 00:06:46.413 } 00:06:46.674 [2024-12-07 04:21:49.687514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.674 [2024-12-07 04:21:49.741308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.674  [2024-12-07T04:21:50.172Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:46.932 00:06:46.932 04:21:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:46.932 04:21:50 -- dd/basic_rw.sh@23 -- # count=15 00:06:46.932 04:21:50 -- dd/basic_rw.sh@24 -- # count=15 00:06:46.932 04:21:50 -- dd/basic_rw.sh@25 -- # size=61440 00:06:46.932 04:21:50 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:46.932 04:21:50 -- dd/common.sh@98 -- # xtrace_disable 00:06:46.932 04:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.502 04:21:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:47.502 04:21:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.502 04:21:50 -- dd/common.sh@31 -- # xtrace_disable 00:06:47.502 04:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.502 [2024-12-07 04:21:50.668059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.502 [2024-12-07 04:21:50.668449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57739 ] 00:06:47.502 { 00:06:47.502 "subsystems": [ 00:06:47.502 { 00:06:47.502 "subsystem": "bdev", 00:06:47.502 "config": [ 00:06:47.502 { 00:06:47.502 "params": { 00:06:47.502 "trtype": "pcie", 00:06:47.502 "traddr": "0000:00:06.0", 00:06:47.502 "name": "Nvme0" 00:06:47.502 }, 00:06:47.502 "method": "bdev_nvme_attach_controller" 00:06:47.502 }, 00:06:47.502 { 00:06:47.502 "method": "bdev_wait_for_examine" 00:06:47.502 } 00:06:47.502 ] 00:06:47.502 } 00:06:47.502 ] 00:06:47.502 } 00:06:47.762 [2024-12-07 04:21:50.803847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.762 [2024-12-07 04:21:50.854999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.762  [2024-12-07T04:21:51.261Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:48.021 00:06:48.021 04:21:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:48.021 04:21:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:48.021 04:21:51 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.021 04:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:48.022 [2024-12-07 04:21:51.229433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.022 [2024-12-07 04:21:51.229580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57757 ] 00:06:48.022 { 00:06:48.022 "subsystems": [ 00:06:48.022 { 00:06:48.022 "subsystem": "bdev", 00:06:48.022 "config": [ 00:06:48.022 { 00:06:48.022 "params": { 00:06:48.022 "trtype": "pcie", 00:06:48.022 "traddr": "0000:00:06.0", 00:06:48.022 "name": "Nvme0" 00:06:48.022 }, 00:06:48.022 "method": "bdev_nvme_attach_controller" 00:06:48.022 }, 00:06:48.022 { 00:06:48.022 "method": "bdev_wait_for_examine" 00:06:48.022 } 00:06:48.022 ] 00:06:48.022 } 00:06:48.022 ] 00:06:48.022 } 00:06:48.280 [2024-12-07 04:21:51.363542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.280 [2024-12-07 04:21:51.412629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.280  [2024-12-07T04:21:51.779Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:48.539 00:06:48.539 04:21:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.539 04:21:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:48.539 04:21:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.539 04:21:51 -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.539 04:21:51 -- dd/common.sh@12 -- # local size=61440 00:06:48.539 04:21:51 -- dd/common.sh@14 -- # local bs=1048576 00:06:48.539 04:21:51 -- dd/common.sh@15 -- # local count=1 00:06:48.539 04:21:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.539 04:21:51 -- dd/common.sh@18 -- # gen_conf 00:06:48.539 04:21:51 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.539 04:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:48.539 [2024-12-07 
04:21:51.761906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.539 [2024-12-07 04:21:51.761998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57765 ] 00:06:48.539 { 00:06:48.539 "subsystems": [ 00:06:48.539 { 00:06:48.539 "subsystem": "bdev", 00:06:48.539 "config": [ 00:06:48.539 { 00:06:48.539 "params": { 00:06:48.539 "trtype": "pcie", 00:06:48.539 "traddr": "0000:00:06.0", 00:06:48.539 "name": "Nvme0" 00:06:48.539 }, 00:06:48.539 "method": "bdev_nvme_attach_controller" 00:06:48.539 }, 00:06:48.539 { 00:06:48.539 "method": "bdev_wait_for_examine" 00:06:48.539 } 00:06:48.539 ] 00:06:48.539 } 00:06:48.539 ] 00:06:48.539 } 00:06:48.799 [2024-12-07 04:21:51.898682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.799 [2024-12-07 04:21:51.946400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.057  [2024-12-07T04:21:52.297Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.057 00:06:49.057 04:21:52 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:49.057 04:21:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.057 04:21:52 -- dd/basic_rw.sh@23 -- # count=7 00:06:49.057 04:21:52 -- dd/basic_rw.sh@24 -- # count=7 00:06:49.057 04:21:52 -- dd/basic_rw.sh@25 -- # size=57344 00:06:49.057 04:21:52 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:49.057 04:21:52 -- dd/common.sh@98 -- # xtrace_disable 00:06:49.057 04:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.626 04:21:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:49.626 04:21:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.626 04:21:52 -- dd/common.sh@31 -- # xtrace_disable 00:06:49.626 04:21:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.626 { 00:06:49.626 "subsystems": [ 00:06:49.626 { 00:06:49.626 "subsystem": "bdev", 00:06:49.626 "config": [ 00:06:49.626 { 00:06:49.626 "params": { 00:06:49.626 "trtype": "pcie", 00:06:49.626 "traddr": "0000:00:06.0", 00:06:49.626 "name": "Nvme0" 00:06:49.626 }, 00:06:49.626 "method": "bdev_nvme_attach_controller" 00:06:49.626 }, 00:06:49.626 { 00:06:49.626 "method": "bdev_wait_for_examine" 00:06:49.626 } 00:06:49.626 ] 00:06:49.626 } 00:06:49.626 ] 00:06:49.626 } 00:06:49.626 [2024-12-07 04:21:52.845714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
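dd_rw builds its block-size list by left-shifting the 4096-byte native size (bss = 4096 8192 16384) and pairs each with the queue depths 1 and 64; the per-size counts in this log are 15, 7 and 3, giving transfers of 61440, 57344 and 49152 bytes respectively. The lines above are the start of the bs=8192 pass. Skeleton of the loops as they appear in the trace, with the per-iteration body elided:

    # Sketch: the nested loops dd_rw runs (iteration body elided).
    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))      # 4096 8192 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            : # write dd.dump0 -> Nvme0n1, read back -> dd.dump1, diff, clear_nvme
        done
    done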
00:06:49.626 [2024-12-07 04:21:52.845825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:06:49.886 [2024-12-07 04:21:52.981636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.886 [2024-12-07 04:21:53.040203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.145  [2024-12-07T04:21:53.385Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:50.145 00:06:50.145 04:21:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.145 04:21:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:50.145 04:21:53 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.145 04:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:50.404 [2024-12-07 04:21:53.390330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.404 [2024-12-07 04:21:53.390664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57801 ] 00:06:50.404 { 00:06:50.404 "subsystems": [ 00:06:50.404 { 00:06:50.404 "subsystem": "bdev", 00:06:50.404 "config": [ 00:06:50.404 { 00:06:50.404 "params": { 00:06:50.404 "trtype": "pcie", 00:06:50.404 "traddr": "0000:00:06.0", 00:06:50.404 "name": "Nvme0" 00:06:50.404 }, 00:06:50.404 "method": "bdev_nvme_attach_controller" 00:06:50.404 }, 00:06:50.404 { 00:06:50.404 "method": "bdev_wait_for_examine" 00:06:50.404 } 00:06:50.404 ] 00:06:50.404 } 00:06:50.404 ] 00:06:50.404 } 00:06:50.404 [2024-12-07 04:21:53.522781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.404 [2024-12-07 04:21:53.576923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.664  [2024-12-07T04:21:53.904Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:50.664 00:06:50.664 04:21:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.664 04:21:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:50.664 04:21:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.664 04:21:53 -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.664 04:21:53 -- dd/common.sh@12 -- # local size=57344 00:06:50.664 04:21:53 -- dd/common.sh@14 -- # local bs=1048576 00:06:50.664 04:21:53 -- dd/common.sh@15 -- # local count=1 00:06:50.664 04:21:53 -- dd/common.sh@18 -- # gen_conf 00:06:50.664 04:21:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.664 04:21:53 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.664 04:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:50.924 [2024-12-07 04:21:53.943762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
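Between iterations, clear_nvme (dd/common.sh@10-18, traced above) wipes the region that was just written by streaming /dev/zero into the bdev with bs=1048576 and count=1, i.e. a single 1 MiB write that covers the 57344 bytes touched by the bs=8192 pass. A compact sketch of such a helper; the argument handling is simplified relative to the traced call clear_nvme Nvme0n1 '' 57344, and the count rounding shown is an assumption rather than the exact dd/common.sh logic:

    # Sketch: zero the head of a bdev so the next pass starts from known data.
    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    clear_nvme() {
        local bdev=$1 size=$2                     # the real helper also takes an nvme_ref argument
        local bs=1048576
        local count=$(((size + bs - 1) / bs))     # round up to whole 1 MiB writes
        "$dd_bin" --if=/dev/zero --ob="$bdev" --bs="$bs" --count="$count"
    }
    clear_nvme Nvme0n1 57344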
00:06:50.924 [2024-12-07 04:21:53.943856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57809 ] 00:06:50.924 { 00:06:50.924 "subsystems": [ 00:06:50.924 { 00:06:50.924 "subsystem": "bdev", 00:06:50.924 "config": [ 00:06:50.924 { 00:06:50.924 "params": { 00:06:50.924 "trtype": "pcie", 00:06:50.924 "traddr": "0000:00:06.0", 00:06:50.924 "name": "Nvme0" 00:06:50.924 }, 00:06:50.924 "method": "bdev_nvme_attach_controller" 00:06:50.924 }, 00:06:50.924 { 00:06:50.924 "method": "bdev_wait_for_examine" 00:06:50.924 } 00:06:50.924 ] 00:06:50.924 } 00:06:50.924 ] 00:06:50.924 } 00:06:50.924 [2024-12-07 04:21:54.078860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.924 [2024-12-07 04:21:54.137342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.183  [2024-12-07T04:21:54.682Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:51.442 00:06:51.442 04:21:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:51.442 04:21:54 -- dd/basic_rw.sh@23 -- # count=7 00:06:51.442 04:21:54 -- dd/basic_rw.sh@24 -- # count=7 00:06:51.442 04:21:54 -- dd/basic_rw.sh@25 -- # size=57344 00:06:51.442 04:21:54 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:51.442 04:21:54 -- dd/common.sh@98 -- # xtrace_disable 00:06:51.442 04:21:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.011 04:21:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:52.011 04:21:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:52.011 04:21:54 -- dd/common.sh@31 -- # xtrace_disable 00:06:52.011 04:21:54 -- common/autotest_common.sh@10 -- # set +x 00:06:52.011 [2024-12-07 04:21:55.035150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.011 [2024-12-07 04:21:55.035397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57827 ] 00:06:52.011 { 00:06:52.011 "subsystems": [ 00:06:52.011 { 00:06:52.011 "subsystem": "bdev", 00:06:52.011 "config": [ 00:06:52.011 { 00:06:52.011 "params": { 00:06:52.011 "trtype": "pcie", 00:06:52.011 "traddr": "0000:00:06.0", 00:06:52.011 "name": "Nvme0" 00:06:52.011 }, 00:06:52.011 "method": "bdev_nvme_attach_controller" 00:06:52.011 }, 00:06:52.011 { 00:06:52.011 "method": "bdev_wait_for_examine" 00:06:52.011 } 00:06:52.011 ] 00:06:52.011 } 00:06:52.011 ] 00:06:52.011 } 00:06:52.011 [2024-12-07 04:21:55.167285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.011 [2024-12-07 04:21:55.219089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.269  [2024-12-07T04:21:55.770Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:52.530 00:06:52.530 04:21:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:52.530 04:21:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:52.530 04:21:55 -- dd/common.sh@31 -- # xtrace_disable 00:06:52.530 04:21:55 -- common/autotest_common.sh@10 -- # set +x 00:06:52.530 [2024-12-07 04:21:55.579334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.530 [2024-12-07 04:21:55.579454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57845 ] 00:06:52.530 { 00:06:52.530 "subsystems": [ 00:06:52.530 { 00:06:52.530 "subsystem": "bdev", 00:06:52.530 "config": [ 00:06:52.530 { 00:06:52.530 "params": { 00:06:52.530 "trtype": "pcie", 00:06:52.530 "traddr": "0000:00:06.0", 00:06:52.530 "name": "Nvme0" 00:06:52.530 }, 00:06:52.530 "method": "bdev_nvme_attach_controller" 00:06:52.530 }, 00:06:52.530 { 00:06:52.530 "method": "bdev_wait_for_examine" 00:06:52.530 } 00:06:52.530 ] 00:06:52.530 } 00:06:52.530 ] 00:06:52.530 } 00:06:52.530 [2024-12-07 04:21:55.716215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.791 [2024-12-07 04:21:55.772460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.791  [2024-12-07T04:21:56.289Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:53.049 00:06:53.049 04:21:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.049 04:21:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:53.049 04:21:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.049 04:21:56 -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.049 04:21:56 -- dd/common.sh@12 -- # local size=57344 00:06:53.049 04:21:56 -- dd/common.sh@14 -- # local bs=1048576 00:06:53.049 04:21:56 -- dd/common.sh@15 -- # local count=1 00:06:53.049 04:21:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:53.049 04:21:56 -- dd/common.sh@18 -- # gen_conf 00:06:53.049 04:21:56 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.049 04:21:56 -- common/autotest_common.sh@10 -- # set +x 00:06:53.049 [2024-12-07 
04:21:56.123244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.049 [2024-12-07 04:21:56.123336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57857 ] 00:06:53.049 { 00:06:53.049 "subsystems": [ 00:06:53.049 { 00:06:53.049 "subsystem": "bdev", 00:06:53.049 "config": [ 00:06:53.049 { 00:06:53.049 "params": { 00:06:53.049 "trtype": "pcie", 00:06:53.049 "traddr": "0000:00:06.0", 00:06:53.049 "name": "Nvme0" 00:06:53.049 }, 00:06:53.049 "method": "bdev_nvme_attach_controller" 00:06:53.049 }, 00:06:53.049 { 00:06:53.049 "method": "bdev_wait_for_examine" 00:06:53.049 } 00:06:53.049 ] 00:06:53.049 } 00:06:53.049 ] 00:06:53.049 } 00:06:53.049 [2024-12-07 04:21:56.259855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.306 [2024-12-07 04:21:56.313590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.306  [2024-12-07T04:21:56.805Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:53.565 00:06:53.565 04:21:56 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:53.565 04:21:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:53.565 04:21:56 -- dd/basic_rw.sh@23 -- # count=3 00:06:53.565 04:21:56 -- dd/basic_rw.sh@24 -- # count=3 00:06:53.565 04:21:56 -- dd/basic_rw.sh@25 -- # size=49152 00:06:53.565 04:21:56 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:53.565 04:21:56 -- dd/common.sh@98 -- # xtrace_disable 00:06:53.565 04:21:56 -- common/autotest_common.sh@10 -- # set +x 00:06:53.824 04:21:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:53.824 04:21:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:53.824 04:21:57 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.824 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:06:54.081 [2024-12-07 04:21:57.106013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
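The block counts come straight from the pattern size: 57344 bytes is 7 blocks of 8192, and the 49152-byte pattern that starts here is 3 blocks of 16384, matching the count=3 / size=49152 pair set just above. In outline the sweep looks like the loop below; this is only a sketch of the shape of the run, the real loop lives in dd/basic_rw.sh and uses its own helpers, and the size-per-block-size pairing is taken from the values observed in this log:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd; DD=/home/vagrant/spdk_repo/spdk/test/dd
# conf: bdev JSON from the first sketch
for bs in 8192 16384; do
  size=$(( bs == 8192 ? 57344 : 49152 ))
  count=$(( size / bs ))                               # 57344/8192 = 7, 49152/16384 = 3
  for qd in 1 64; do
    head -c "$size" /dev/urandom > "$DD/dd.dump0"      # stand-in for gen_bytes
    "$SPDK_DD" --if="$DD/dd.dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
    "$SPDK_DD" --ib=Nvme0n1 --of="$DD/dd.dump1" --bs="$bs" --qd="$qd" --count="$count" --json "$conf"
    diff -q "$DD/dd.dump0" "$DD/dd.dump1"
  done
done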
00:06:54.081 [2024-12-07 04:21:57.106260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57871 ] 00:06:54.081 { 00:06:54.081 "subsystems": [ 00:06:54.081 { 00:06:54.081 "subsystem": "bdev", 00:06:54.081 "config": [ 00:06:54.081 { 00:06:54.081 "params": { 00:06:54.081 "trtype": "pcie", 00:06:54.081 "traddr": "0000:00:06.0", 00:06:54.081 "name": "Nvme0" 00:06:54.081 }, 00:06:54.081 "method": "bdev_nvme_attach_controller" 00:06:54.081 }, 00:06:54.081 { 00:06:54.081 "method": "bdev_wait_for_examine" 00:06:54.081 } 00:06:54.081 ] 00:06:54.081 } 00:06:54.081 ] 00:06:54.081 } 00:06:54.081 [2024-12-07 04:21:57.239815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.081 [2024-12-07 04:21:57.300545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.340  [2024-12-07T04:21:57.839Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:54.599 00:06:54.599 04:21:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:54.599 04:21:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.599 04:21:57 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.599 04:21:57 -- common/autotest_common.sh@10 -- # set +x 00:06:54.599 [2024-12-07 04:21:57.657796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.599 [2024-12-07 04:21:57.657896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57889 ] 00:06:54.599 { 00:06:54.599 "subsystems": [ 00:06:54.599 { 00:06:54.599 "subsystem": "bdev", 00:06:54.599 "config": [ 00:06:54.599 { 00:06:54.599 "params": { 00:06:54.599 "trtype": "pcie", 00:06:54.599 "traddr": "0000:00:06.0", 00:06:54.599 "name": "Nvme0" 00:06:54.599 }, 00:06:54.599 "method": "bdev_nvme_attach_controller" 00:06:54.599 }, 00:06:54.599 { 00:06:54.599 "method": "bdev_wait_for_examine" 00:06:54.599 } 00:06:54.599 ] 00:06:54.599 } 00:06:54.599 ] 00:06:54.599 } 00:06:54.599 [2024-12-07 04:21:57.795567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.858 [2024-12-07 04:21:57.849363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.858  [2024-12-07T04:21:58.357Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:55.117 00:06:55.117 04:21:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.117 04:21:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:55.117 04:21:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:55.117 04:21:58 -- dd/common.sh@11 -- # local nvme_ref= 00:06:55.117 04:21:58 -- dd/common.sh@12 -- # local size=49152 00:06:55.117 04:21:58 -- dd/common.sh@14 -- # local bs=1048576 00:06:55.117 04:21:58 -- dd/common.sh@15 -- # local count=1 00:06:55.117 04:21:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:55.117 04:21:58 -- dd/common.sh@18 -- # gen_conf 00:06:55.117 04:21:58 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.117 04:21:58 -- common/autotest_common.sh@10 -- # set +x 00:06:55.117 [2024-12-07 
04:21:58.213271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.117 [2024-12-07 04:21:58.213362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57898 ] 00:06:55.117 { 00:06:55.117 "subsystems": [ 00:06:55.117 { 00:06:55.117 "subsystem": "bdev", 00:06:55.117 "config": [ 00:06:55.117 { 00:06:55.117 "params": { 00:06:55.117 "trtype": "pcie", 00:06:55.117 "traddr": "0000:00:06.0", 00:06:55.117 "name": "Nvme0" 00:06:55.117 }, 00:06:55.117 "method": "bdev_nvme_attach_controller" 00:06:55.117 }, 00:06:55.117 { 00:06:55.117 "method": "bdev_wait_for_examine" 00:06:55.117 } 00:06:55.117 ] 00:06:55.117 } 00:06:55.117 ] 00:06:55.117 } 00:06:55.117 [2024-12-07 04:21:58.352994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.376 [2024-12-07 04:21:58.410406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.376  [2024-12-07T04:21:58.874Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:55.634 00:06:55.634 04:21:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:55.634 04:21:58 -- dd/basic_rw.sh@23 -- # count=3 00:06:55.634 04:21:58 -- dd/basic_rw.sh@24 -- # count=3 00:06:55.634 04:21:58 -- dd/basic_rw.sh@25 -- # size=49152 00:06:55.634 04:21:58 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:55.634 04:21:58 -- dd/common.sh@98 -- # xtrace_disable 00:06:55.634 04:21:58 -- common/autotest_common.sh@10 -- # set +x 00:06:56.201 04:21:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:56.201 04:21:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:56.201 04:21:59 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.201 04:21:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.201 [2024-12-07 04:21:59.235953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
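Each pass is separated by the clear_nvme step traced at the top of this block: before the next pattern is written, the start of the bdev is overwritten with zeroes so that data left over from the previous pass cannot satisfy the later diff. What the step reduces to here, with the same SPDK_DD and conf as in the earlier sketches:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# conf: bdev JSON from the first sketch
# Overwrite one 1 MiB block of the bdev with zeroes between passes
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"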
00:06:56.201 [2024-12-07 04:21:59.236191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57915 ] 00:06:56.201 { 00:06:56.201 "subsystems": [ 00:06:56.201 { 00:06:56.201 "subsystem": "bdev", 00:06:56.201 "config": [ 00:06:56.201 { 00:06:56.201 "params": { 00:06:56.201 "trtype": "pcie", 00:06:56.201 "traddr": "0000:00:06.0", 00:06:56.201 "name": "Nvme0" 00:06:56.201 }, 00:06:56.201 "method": "bdev_nvme_attach_controller" 00:06:56.201 }, 00:06:56.201 { 00:06:56.201 "method": "bdev_wait_for_examine" 00:06:56.201 } 00:06:56.201 ] 00:06:56.201 } 00:06:56.201 ] 00:06:56.201 } 00:06:56.201 [2024-12-07 04:21:59.373424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.201 [2024-12-07 04:21:59.429449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.460  [2024-12-07T04:21:59.960Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:56.720 00:06:56.720 04:21:59 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:56.720 04:21:59 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:56.720 04:21:59 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.720 04:21:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.720 [2024-12-07 04:21:59.790610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.720 [2024-12-07 04:21:59.790716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57933 ] 00:06:56.720 { 00:06:56.720 "subsystems": [ 00:06:56.720 { 00:06:56.720 "subsystem": "bdev", 00:06:56.720 "config": [ 00:06:56.720 { 00:06:56.720 "params": { 00:06:56.720 "trtype": "pcie", 00:06:56.720 "traddr": "0000:00:06.0", 00:06:56.720 "name": "Nvme0" 00:06:56.720 }, 00:06:56.720 "method": "bdev_nvme_attach_controller" 00:06:56.720 }, 00:06:56.720 { 00:06:56.720 "method": "bdev_wait_for_examine" 00:06:56.720 } 00:06:56.720 ] 00:06:56.720 } 00:06:56.720 ] 00:06:56.720 } 00:06:56.720 [2024-12-07 04:21:59.921786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.979 [2024-12-07 04:21:59.993409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.979  [2024-12-07T04:22:00.478Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:57.238 00:06:57.238 04:22:00 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.238 04:22:00 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:57.238 04:22:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.238 04:22:00 -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.238 04:22:00 -- dd/common.sh@12 -- # local size=49152 00:06:57.238 04:22:00 -- dd/common.sh@14 -- # local bs=1048576 00:06:57.238 04:22:00 -- dd/common.sh@15 -- # local count=1 00:06:57.238 04:22:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:57.238 04:22:00 -- dd/common.sh@18 -- # gen_conf 00:06:57.238 04:22:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:57.238 04:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:57.238 [2024-12-07 
04:22:00.377375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.238 [2024-12-07 04:22:00.377464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57952 ] 00:06:57.238 { 00:06:57.238 "subsystems": [ 00:06:57.238 { 00:06:57.238 "subsystem": "bdev", 00:06:57.238 "config": [ 00:06:57.238 { 00:06:57.238 "params": { 00:06:57.238 "trtype": "pcie", 00:06:57.238 "traddr": "0000:00:06.0", 00:06:57.238 "name": "Nvme0" 00:06:57.238 }, 00:06:57.238 "method": "bdev_nvme_attach_controller" 00:06:57.238 }, 00:06:57.238 { 00:06:57.238 "method": "bdev_wait_for_examine" 00:06:57.238 } 00:06:57.238 ] 00:06:57.238 } 00:06:57.238 ] 00:06:57.238 } 00:06:57.497 [2024-12-07 04:22:00.515575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.497 [2024-12-07 04:22:00.587429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.497  [2024-12-07T04:22:00.996Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:57.756 00:06:57.756 ************************************ 00:06:57.756 END TEST dd_rw 00:06:57.756 ************************************ 00:06:57.756 00:06:57.756 real 0m13.050s 00:06:57.756 user 0m9.805s 00:06:57.756 sys 0m2.157s 00:06:57.756 04:22:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.756 04:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:57.756 04:22:00 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:57.756 04:22:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.756 04:22:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.756 04:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:57.756 ************************************ 00:06:57.756 START TEST dd_rw_offset 00:06:57.756 ************************************ 00:06:57.756 04:22:00 -- common/autotest_common.sh@1114 -- # basic_offset 00:06:57.756 04:22:00 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:57.756 04:22:00 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:57.756 04:22:00 -- dd/common.sh@98 -- # xtrace_disable 00:06:57.756 04:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.016 04:22:00 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:58.016 04:22:00 -- dd/basic_rw.sh@56 -- # 
data=bjvtg6f20evpcszcy8l192fmivl5i7yscxddt3tephcc6ydpzf1s5l4mn1kt063dr856da1gs6tnzdgbdekk44rpglp69wo6llaq4qnaz0drgnvj05yc31lxpjevx1xjeikctglayll3vhiud3niphaqt14ph0mvzjwdmbhx181bbhyq50lu5ur4rlvfdqr6orxyvnxv1th8229ki6p8dapewsnhlc1fm2myx2hsw1yy9cxh2tok7zml39h42tfmehwmrc605iy4be53adq55l2npo3hc69nf1hw0cwa6psbdlm5jcqtq5rm9s5swst5vebi6g6pl4rh6rzn9jvwrkprqsbvhxnb202ogrou2shw6l3y0oagzyaxbdsig8ak3qg0bezrhhs33cquibp4ds214v4kw42eiwv766522xd0qjuyqtj88glbxf88qoltk0yna6utp08meudrxek2nml0lykfelbwoauwd2d6xmt98oqhko8oya4577hfk1yzknzzjogaga7k3g4b4le6dklyggfy28pbp4jm20lwkvcsa86bucw34s1nzfyzv4v9h9a2r14z59kgkit8phq3v35w3hs3d8p8ivwjied4mlw58xrpmjhdj5eh68sqhev35d54po71xlshrq0u36zgvn771ld1tqv2e62kx621r5pp9o3h5sifmp8vje5cj7cg41acylj7ny37o6x5ha1vu3lr5xniu4r7vcvbzgwnbjjv3fcj52yn7ifurb3zybrf36ynr2207ui4r01z7bh83z8s19dkd7euxz3qqlpb41jkw79t39duz4o7k9jpvhu4uflfdra0gyqyrrno07e41wl6srjz1mapyme4kfrso8kuzjxe3k1rqw5i1rqbki5vxy3acez9dimqned5pog9cieqc51s7xkb47qdswik0m55p3463c4ifs1rp9zmizba6gk8rki3lia0pwu64o995je10ovrezvgl1qhssx5unv692hdxgvkgzio1of6lm8nqgtvgdinb4iexyf9dn4tonbicprvuzsldaongyrs22pl2o0bdl28399nr5usrxeexrp6m8qjfvv6wfy5ru7efhqjk26kkawqln801495im46mnbt2h1iarwun01zrkgn6k2te7bjzrbzkpau3o2fgv0r2gf4um92ho7n4x1jrdbjkes0l9y8e1pfxk1og8uslovbic47dulhtk5hn2evjgbjjdua9dtuc5rfvfevrvbi3ep8yj6coish8yz91p6hnsfnh0tnyy3k4k0illb030alufxwli778bd2bxtzlpx8mv38cklpdj3349l4l4a2u8fm3r5s0q300lvm8wq8ko92ljyudn3c3aux2xm992ifwvgf9s6xi34bug84cr0k9vekj8t3yd40l5i726h333nhlxk1dr0tzgd30d5u5v6iy0naceyd5n2at43pv3fqetlovwgdf3zjjzivhu4nahtdb2nhcz9aeopcxqez17f0f4qmjdyv564v09l7g2mgvefa1h22smdhfqm7f0rw7g1sht1x7wr2ss2bvab1inmn2cnrctmc446lght1wi9s7vippaxs9g76ppm8pjr4hcmbnhez9mtwpqxbqkfkdajdpxtgobddxeo67p55ml21skyvyrsg3lt5teuxlc3nue9fq7k5t54m0tb3hpxuak62ufxqvfathb1w0210rb5fi4gqgkwutngpop6pcm3w0pyh2spbh1kw5y700j1udi7r04mxm3gsxduyygvj32jghppc98vfz99llk2raqta04wd0tdhgern8qm78onjsh76z9y1jaddacqqg3a2xdgzooee3vf40nidl3kqb2jjejdulphw8t97eq34ftu14e8hpj4azoc13am1x8vtcgsbofc2ug8b2055106290etbwm7pncpd4p1lc7bxlh5mbnx2mc79gp4mu5h087u8zjc32h0rv1rom0qx7kqqd3qvyq2mpw6hp1rcuutnyijak5j4ya0yizrpld5icmzie1bquzxgw832wg07fx3daivnvvsd1nbf0t841gjlqnu1o330u3m3f1rt200iq96x0tqwvdhamxrb96wjslc3ot0zqt7v1am3z8ndcw7ngnkaxqqkmu0tbol328j844l4ln84xw6arr184hid97rxbhpuaxpb270k5d3bkq47z1okm4iww1hd29e3jzprvuwy3s22vjiq2yboor3s1qa1xcgcxtnaqi9ea70busi69iwp8u5s978x7mdpanu00sqim6jl3xnpuz3xe04gw79nwrkd1dqw1w8e5wntpa23yev2w0v9hzsapi8zyekwxx4xzpey8d19o0o3vbbogu8otuyk9rdgft7nxqv4vhu0ld050tsdtbl7vezr2jmtv8li78qks7461fm4kld4o5m64rndq9iqr1w3i421d285758veurq3gvobuvvwkjd883ztm25teos099d1juu92wy6b2j76mvtcfytxieufpxx22jub4k17a8gaz6il4yhbdryerm1i7muxfzl1i3405ztld0zii9sla04g9gsfr2z0ciki4igbtg434q7xaf9g0m0ebrqd7tm37nr19jyvfgs4nvfbb6y24frzho23xl40j3fq9h8fe4v1izud9d32wqzgyou7j7217jdfpgi39abs77brh7rxg1rt2fsjej090pz5wciampf4vastqvfqyuit05v5xbpilmbucqugvcq3qngjn1j97oedi4kw4osan8yozcevjaxlern1t10qxzfrtvoaptl17aftk8ke3fml6rrjqi0rtt2d8u8acnudeme5f50i8qg18g3aujsi8dqve8ql2lv2wxrne8a0gwg03p5tz4dry63454g4h13p8fx3cbwfk9ukyvlnf6vkbh2q8b3pwxts1rddwnr3dglzv2sv18p05s79yh8irpju6cjb2tbp1j86tdukywp96d57zq2isqzluvclvlxkfkj0tsj9n4010ij7yyja5kivvadu9992q9cgxsqufsu22vqr9jk5p8rpx6brqu4eytmysznv0ja07k1vp2b25mduwt3x6ouzcgnoq7hnqei4kxpk4ubguuzzq3oapm4yo6k4oll7f2sq9fg5fbg25u7h4nojamn6f0pz2gzbej93lcy54ko2b79xpd5x6s11ylky9qzu70zxarc6cxgg3d7wz27xsve1t8r8voevpzu2o7q8m9almjp183i64zlzl0scfzyv608meiym77lsbfbajdk1y8zril4u6jsdwqlmgphbmk7tfm6wf62e55d4i60nmk9vbahxxa1fg1h3u5jramayzfbxyaijtlixs0wn216fwpb0ced158x50n3357zbadrgocmu0mokptkchutdt6l8ftht50hmbukip6kq87sm8104mf8tp2ok0e451viz6m47d4l0h4fdimzt676t6xvlgsqojw7lcu7qmc1iy1eexr6otu6or6l1qba76ts3q0oyqmkb4sw
iecusrjs1ppu2onyk8w22s0oxlog9esne15frgouisvk14sfigm30s5zucw9hxbkx5vfdppvmaudictmat3jq2pqipl4mrz0b9xf1m2jm411w2zuu0mpxpvqtypgrxwx1mp4wt4x7zpgfdndcsmqpvb5kabsv9aah2dzr9zouo64xvrgnj80roaq81sz10mm5myyag3y0mmkk4tnwlng9nvfg8hysqpybf0pr78j6uqwmsacakep43jwkfcjr876uf4aooeloph4n79guberzrn8tb0uj6utxdmzh5hxniw1tpd1ayb1127rutmdap39fg43ylw6jdh2321ly2n3pnljh6vl9eqokfz7bm8bm4ci1osbo1kdut3gikcamy3fkwqix8zpqv654nuivpe1fqfyob5xq7meb0w26y17ndkrrwerct3lkzqfeffw50u0avjbwbfpsc2ua3an0tm6uplqofvabgujxgwlm1pljff7d9ly398ifvjhumxf2dw6q4xgft3ajt7z7s6z64dqe3dzxhstvoow5z 00:06:58.016 04:22:00 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:58.016 04:22:00 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:58.016 04:22:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:58.016 04:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:58.016 [2024-12-07 04:22:01.048873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.016 [2024-12-07 04:22:01.049553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57977 ] 00:06:58.016 { 00:06:58.016 "subsystems": [ 00:06:58.016 { 00:06:58.016 "subsystem": "bdev", 00:06:58.016 "config": [ 00:06:58.016 { 00:06:58.016 "params": { 00:06:58.016 "trtype": "pcie", 00:06:58.016 "traddr": "0000:00:06.0", 00:06:58.016 "name": "Nvme0" 00:06:58.016 }, 00:06:58.016 "method": "bdev_nvme_attach_controller" 00:06:58.016 }, 00:06:58.016 { 00:06:58.016 "method": "bdev_wait_for_examine" 00:06:58.016 } 00:06:58.016 ] 00:06:58.016 } 00:06:58.016 ] 00:06:58.016 } 00:06:58.016 [2024-12-07 04:22:01.187719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.016 [2024-12-07 04:22:01.237470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.275  [2024-12-07T04:22:01.775Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:58.535 00:06:58.535 04:22:01 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:58.535 04:22:01 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:58.535 04:22:01 -- dd/common.sh@31 -- # xtrace_disable 00:06:58.535 04:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:58.535 [2024-12-07 04:22:01.572363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
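dd_rw_offset checks that data written at an offset on the bdev (--seek=1) can be read back from the same offset (--skip=1 --count=1) unchanged: the 4096-character string generated above is written one block in, read back, and compared against the original with a bash pattern match. The backslash-heavy repeat of the string in the trace that follows is only xtrace quoting the right-hand side of that [[ ... == ... ]] test, not a second copy of the data. In outline (a sketch reusing SPDK_DD, DD and conf from the first sketch; the random alphanumeric string stands in for gen_bytes 4096):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd; DD=/home/vagrant/spdk_repo/spdk/test/dd
# conf: bdev JSON from the first sketch
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)  # stand-in for gen_bytes 4096
printf %s "$data" > "$DD/dd.dump0"
# Write the single 4 KiB chunk at offset 1, then read the same chunk back
"$SPDK_DD" --if="$DD/dd.dump0" --ob=Nvme0n1 --seek=1 --json "$conf"
"$SPDK_DD" --ib=Nvme0n1 --of="$DD/dd.dump1" --skip=1 --count=1 --json "$conf"
read -rn4096 data_check < "$DD/dd.dump1"
[[ "$data_check" == "$data" ]] && echo "offset read/write OK"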
00:06:58.535 [2024-12-07 04:22:01.572446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57995 ] 00:06:58.535 { 00:06:58.535 "subsystems": [ 00:06:58.535 { 00:06:58.535 "subsystem": "bdev", 00:06:58.535 "config": [ 00:06:58.535 { 00:06:58.535 "params": { 00:06:58.535 "trtype": "pcie", 00:06:58.535 "traddr": "0000:00:06.0", 00:06:58.535 "name": "Nvme0" 00:06:58.535 }, 00:06:58.535 "method": "bdev_nvme_attach_controller" 00:06:58.535 }, 00:06:58.535 { 00:06:58.535 "method": "bdev_wait_for_examine" 00:06:58.535 } 00:06:58.535 ] 00:06:58.535 } 00:06:58.535 ] 00:06:58.535 } 00:06:58.535 [2024-12-07 04:22:01.708667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.535 [2024-12-07 04:22:01.759270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.794  [2024-12-07T04:22:02.304Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:59.064 00:06:59.064 04:22:02 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:59.065 04:22:02 -- dd/basic_rw.sh@72 -- # [[ bjvtg6f20evpcszcy8l192fmivl5i7yscxddt3tephcc6ydpzf1s5l4mn1kt063dr856da1gs6tnzdgbdekk44rpglp69wo6llaq4qnaz0drgnvj05yc31lxpjevx1xjeikctglayll3vhiud3niphaqt14ph0mvzjwdmbhx181bbhyq50lu5ur4rlvfdqr6orxyvnxv1th8229ki6p8dapewsnhlc1fm2myx2hsw1yy9cxh2tok7zml39h42tfmehwmrc605iy4be53adq55l2npo3hc69nf1hw0cwa6psbdlm5jcqtq5rm9s5swst5vebi6g6pl4rh6rzn9jvwrkprqsbvhxnb202ogrou2shw6l3y0oagzyaxbdsig8ak3qg0bezrhhs33cquibp4ds214v4kw42eiwv766522xd0qjuyqtj88glbxf88qoltk0yna6utp08meudrxek2nml0lykfelbwoauwd2d6xmt98oqhko8oya4577hfk1yzknzzjogaga7k3g4b4le6dklyggfy28pbp4jm20lwkvcsa86bucw34s1nzfyzv4v9h9a2r14z59kgkit8phq3v35w3hs3d8p8ivwjied4mlw58xrpmjhdj5eh68sqhev35d54po71xlshrq0u36zgvn771ld1tqv2e62kx621r5pp9o3h5sifmp8vje5cj7cg41acylj7ny37o6x5ha1vu3lr5xniu4r7vcvbzgwnbjjv3fcj52yn7ifurb3zybrf36ynr2207ui4r01z7bh83z8s19dkd7euxz3qqlpb41jkw79t39duz4o7k9jpvhu4uflfdra0gyqyrrno07e41wl6srjz1mapyme4kfrso8kuzjxe3k1rqw5i1rqbki5vxy3acez9dimqned5pog9cieqc51s7xkb47qdswik0m55p3463c4ifs1rp9zmizba6gk8rki3lia0pwu64o995je10ovrezvgl1qhssx5unv692hdxgvkgzio1of6lm8nqgtvgdinb4iexyf9dn4tonbicprvuzsldaongyrs22pl2o0bdl28399nr5usrxeexrp6m8qjfvv6wfy5ru7efhqjk26kkawqln801495im46mnbt2h1iarwun01zrkgn6k2te7bjzrbzkpau3o2fgv0r2gf4um92ho7n4x1jrdbjkes0l9y8e1pfxk1og8uslovbic47dulhtk5hn2evjgbjjdua9dtuc5rfvfevrvbi3ep8yj6coish8yz91p6hnsfnh0tnyy3k4k0illb030alufxwli778bd2bxtzlpx8mv38cklpdj3349l4l4a2u8fm3r5s0q300lvm8wq8ko92ljyudn3c3aux2xm992ifwvgf9s6xi34bug84cr0k9vekj8t3yd40l5i726h333nhlxk1dr0tzgd30d5u5v6iy0naceyd5n2at43pv3fqetlovwgdf3zjjzivhu4nahtdb2nhcz9aeopcxqez17f0f4qmjdyv564v09l7g2mgvefa1h22smdhfqm7f0rw7g1sht1x7wr2ss2bvab1inmn2cnrctmc446lght1wi9s7vippaxs9g76ppm8pjr4hcmbnhez9mtwpqxbqkfkdajdpxtgobddxeo67p55ml21skyvyrsg3lt5teuxlc3nue9fq7k5t54m0tb3hpxuak62ufxqvfathb1w0210rb5fi4gqgkwutngpop6pcm3w0pyh2spbh1kw5y700j1udi7r04mxm3gsxduyygvj32jghppc98vfz99llk2raqta04wd0tdhgern8qm78onjsh76z9y1jaddacqqg3a2xdgzooee3vf40nidl3kqb2jjejdulphw8t97eq34ftu14e8hpj4azoc13am1x8vtcgsbofc2ug8b2055106290etbwm7pncpd4p1lc7bxlh5mbnx2mc79gp4mu5h087u8zjc32h0rv1rom0qx7kqqd3qvyq2mpw6hp1rcuutnyijak5j4ya0yizrpld5icmzie1bquzxgw832wg07fx3daivnvvsd1nbf0t841gjlqnu1o330u3m3f1rt200iq96x0tqwvdhamxrb96wjslc3ot0zqt7v1am3z8ndcw7ngnkaxqqkmu0tbol328j844l4ln84xw6arr184hid97rxbhpuaxpb270k5d3bkq47z1okm4iww1hd29e3jzprvuwy3s22vjiq2yboor3s1qa1xcgcxtnaqi9ea70busi69iwp8u5s978x7mdpanu00sqim6jl3xnpuz3xe04gw79nwrkd1dqw1w8e5wntpa23yev2w0v9hzsapi8zye
kwxx4xzpey8d19o0o3vbbogu8otuyk9rdgft7nxqv4vhu0ld050tsdtbl7vezr2jmtv8li78qks7461fm4kld4o5m64rndq9iqr1w3i421d285758veurq3gvobuvvwkjd883ztm25teos099d1juu92wy6b2j76mvtcfytxieufpxx22jub4k17a8gaz6il4yhbdryerm1i7muxfzl1i3405ztld0zii9sla04g9gsfr2z0ciki4igbtg434q7xaf9g0m0ebrqd7tm37nr19jyvfgs4nvfbb6y24frzho23xl40j3fq9h8fe4v1izud9d32wqzgyou7j7217jdfpgi39abs77brh7rxg1rt2fsjej090pz5wciampf4vastqvfqyuit05v5xbpilmbucqugvcq3qngjn1j97oedi4kw4osan8yozcevjaxlern1t10qxzfrtvoaptl17aftk8ke3fml6rrjqi0rtt2d8u8acnudeme5f50i8qg18g3aujsi8dqve8ql2lv2wxrne8a0gwg03p5tz4dry63454g4h13p8fx3cbwfk9ukyvlnf6vkbh2q8b3pwxts1rddwnr3dglzv2sv18p05s79yh8irpju6cjb2tbp1j86tdukywp96d57zq2isqzluvclvlxkfkj0tsj9n4010ij7yyja5kivvadu9992q9cgxsqufsu22vqr9jk5p8rpx6brqu4eytmysznv0ja07k1vp2b25mduwt3x6ouzcgnoq7hnqei4kxpk4ubguuzzq3oapm4yo6k4oll7f2sq9fg5fbg25u7h4nojamn6f0pz2gzbej93lcy54ko2b79xpd5x6s11ylky9qzu70zxarc6cxgg3d7wz27xsve1t8r8voevpzu2o7q8m9almjp183i64zlzl0scfzyv608meiym77lsbfbajdk1y8zril4u6jsdwqlmgphbmk7tfm6wf62e55d4i60nmk9vbahxxa1fg1h3u5jramayzfbxyaijtlixs0wn216fwpb0ced158x50n3357zbadrgocmu0mokptkchutdt6l8ftht50hmbukip6kq87sm8104mf8tp2ok0e451viz6m47d4l0h4fdimzt676t6xvlgsqojw7lcu7qmc1iy1eexr6otu6or6l1qba76ts3q0oyqmkb4swiecusrjs1ppu2onyk8w22s0oxlog9esne15frgouisvk14sfigm30s5zucw9hxbkx5vfdppvmaudictmat3jq2pqipl4mrz0b9xf1m2jm411w2zuu0mpxpvqtypgrxwx1mp4wt4x7zpgfdndcsmqpvb5kabsv9aah2dzr9zouo64xvrgnj80roaq81sz10mm5myyag3y0mmkk4tnwlng9nvfg8hysqpybf0pr78j6uqwmsacakep43jwkfcjr876uf4aooeloph4n79guberzrn8tb0uj6utxdmzh5hxniw1tpd1ayb1127rutmdap39fg43ylw6jdh2321ly2n3pnljh6vl9eqokfz7bm8bm4ci1osbo1kdut3gikcamy3fkwqix8zpqv654nuivpe1fqfyob5xq7meb0w26y17ndkrrwerct3lkzqfeffw50u0avjbwbfpsc2ua3an0tm6uplqofvabgujxgwlm1pljff7d9ly398ifvjhumxf2dw6q4xgft3ajt7z7s6z64dqe3dzxhstvoow5z == \b\j\v\t\g\6\f\2\0\e\v\p\c\s\z\c\y\8\l\1\9\2\f\m\i\v\l\5\i\7\y\s\c\x\d\d\t\3\t\e\p\h\c\c\6\y\d\p\z\f\1\s\5\l\4\m\n\1\k\t\0\6\3\d\r\8\5\6\d\a\1\g\s\6\t\n\z\d\g\b\d\e\k\k\4\4\r\p\g\l\p\6\9\w\o\6\l\l\a\q\4\q\n\a\z\0\d\r\g\n\v\j\0\5\y\c\3\1\l\x\p\j\e\v\x\1\x\j\e\i\k\c\t\g\l\a\y\l\l\3\v\h\i\u\d\3\n\i\p\h\a\q\t\1\4\p\h\0\m\v\z\j\w\d\m\b\h\x\1\8\1\b\b\h\y\q\5\0\l\u\5\u\r\4\r\l\v\f\d\q\r\6\o\r\x\y\v\n\x\v\1\t\h\8\2\2\9\k\i\6\p\8\d\a\p\e\w\s\n\h\l\c\1\f\m\2\m\y\x\2\h\s\w\1\y\y\9\c\x\h\2\t\o\k\7\z\m\l\3\9\h\4\2\t\f\m\e\h\w\m\r\c\6\0\5\i\y\4\b\e\5\3\a\d\q\5\5\l\2\n\p\o\3\h\c\6\9\n\f\1\h\w\0\c\w\a\6\p\s\b\d\l\m\5\j\c\q\t\q\5\r\m\9\s\5\s\w\s\t\5\v\e\b\i\6\g\6\p\l\4\r\h\6\r\z\n\9\j\v\w\r\k\p\r\q\s\b\v\h\x\n\b\2\0\2\o\g\r\o\u\2\s\h\w\6\l\3\y\0\o\a\g\z\y\a\x\b\d\s\i\g\8\a\k\3\q\g\0\b\e\z\r\h\h\s\3\3\c\q\u\i\b\p\4\d\s\2\1\4\v\4\k\w\4\2\e\i\w\v\7\6\6\5\2\2\x\d\0\q\j\u\y\q\t\j\8\8\g\l\b\x\f\8\8\q\o\l\t\k\0\y\n\a\6\u\t\p\0\8\m\e\u\d\r\x\e\k\2\n\m\l\0\l\y\k\f\e\l\b\w\o\a\u\w\d\2\d\6\x\m\t\9\8\o\q\h\k\o\8\o\y\a\4\5\7\7\h\f\k\1\y\z\k\n\z\z\j\o\g\a\g\a\7\k\3\g\4\b\4\l\e\6\d\k\l\y\g\g\f\y\2\8\p\b\p\4\j\m\2\0\l\w\k\v\c\s\a\8\6\b\u\c\w\3\4\s\1\n\z\f\y\z\v\4\v\9\h\9\a\2\r\1\4\z\5\9\k\g\k\i\t\8\p\h\q\3\v\3\5\w\3\h\s\3\d\8\p\8\i\v\w\j\i\e\d\4\m\l\w\5\8\x\r\p\m\j\h\d\j\5\e\h\6\8\s\q\h\e\v\3\5\d\5\4\p\o\7\1\x\l\s\h\r\q\0\u\3\6\z\g\v\n\7\7\1\l\d\1\t\q\v\2\e\6\2\k\x\6\2\1\r\5\p\p\9\o\3\h\5\s\i\f\m\p\8\v\j\e\5\c\j\7\c\g\4\1\a\c\y\l\j\7\n\y\3\7\o\6\x\5\h\a\1\v\u\3\l\r\5\x\n\i\u\4\r\7\v\c\v\b\z\g\w\n\b\j\j\v\3\f\c\j\5\2\y\n\7\i\f\u\r\b\3\z\y\b\r\f\3\6\y\n\r\2\2\0\7\u\i\4\r\0\1\z\7\b\h\8\3\z\8\s\1\9\d\k\d\7\e\u\x\z\3\q\q\l\p\b\4\1\j\k\w\7\9\t\3\9\d\u\z\4\o\7\k\9\j\p\v\h\u\4\u\f\l\f\d\r\a\0\g\y\q\y\r\r\n\o\0\7\e\4\1\w\l\6\s\r\j\z\1\m\a\p\y\m\e\4\k\f\r\s\o\8\k\u\z\j\x\e\3\k\1\r\q\w\5\i\1\r\q\b\k\i\5\v\x\y\3\a\c\e\z\9\d\i\m
\q\n\e\d\5\p\o\g\9\c\i\e\q\c\5\1\s\7\x\k\b\4\7\q\d\s\w\i\k\0\m\5\5\p\3\4\6\3\c\4\i\f\s\1\r\p\9\z\m\i\z\b\a\6\g\k\8\r\k\i\3\l\i\a\0\p\w\u\6\4\o\9\9\5\j\e\1\0\o\v\r\e\z\v\g\l\1\q\h\s\s\x\5\u\n\v\6\9\2\h\d\x\g\v\k\g\z\i\o\1\o\f\6\l\m\8\n\q\g\t\v\g\d\i\n\b\4\i\e\x\y\f\9\d\n\4\t\o\n\b\i\c\p\r\v\u\z\s\l\d\a\o\n\g\y\r\s\2\2\p\l\2\o\0\b\d\l\2\8\3\9\9\n\r\5\u\s\r\x\e\e\x\r\p\6\m\8\q\j\f\v\v\6\w\f\y\5\r\u\7\e\f\h\q\j\k\2\6\k\k\a\w\q\l\n\8\0\1\4\9\5\i\m\4\6\m\n\b\t\2\h\1\i\a\r\w\u\n\0\1\z\r\k\g\n\6\k\2\t\e\7\b\j\z\r\b\z\k\p\a\u\3\o\2\f\g\v\0\r\2\g\f\4\u\m\9\2\h\o\7\n\4\x\1\j\r\d\b\j\k\e\s\0\l\9\y\8\e\1\p\f\x\k\1\o\g\8\u\s\l\o\v\b\i\c\4\7\d\u\l\h\t\k\5\h\n\2\e\v\j\g\b\j\j\d\u\a\9\d\t\u\c\5\r\f\v\f\e\v\r\v\b\i\3\e\p\8\y\j\6\c\o\i\s\h\8\y\z\9\1\p\6\h\n\s\f\n\h\0\t\n\y\y\3\k\4\k\0\i\l\l\b\0\3\0\a\l\u\f\x\w\l\i\7\7\8\b\d\2\b\x\t\z\l\p\x\8\m\v\3\8\c\k\l\p\d\j\3\3\4\9\l\4\l\4\a\2\u\8\f\m\3\r\5\s\0\q\3\0\0\l\v\m\8\w\q\8\k\o\9\2\l\j\y\u\d\n\3\c\3\a\u\x\2\x\m\9\9\2\i\f\w\v\g\f\9\s\6\x\i\3\4\b\u\g\8\4\c\r\0\k\9\v\e\k\j\8\t\3\y\d\4\0\l\5\i\7\2\6\h\3\3\3\n\h\l\x\k\1\d\r\0\t\z\g\d\3\0\d\5\u\5\v\6\i\y\0\n\a\c\e\y\d\5\n\2\a\t\4\3\p\v\3\f\q\e\t\l\o\v\w\g\d\f\3\z\j\j\z\i\v\h\u\4\n\a\h\t\d\b\2\n\h\c\z\9\a\e\o\p\c\x\q\e\z\1\7\f\0\f\4\q\m\j\d\y\v\5\6\4\v\0\9\l\7\g\2\m\g\v\e\f\a\1\h\2\2\s\m\d\h\f\q\m\7\f\0\r\w\7\g\1\s\h\t\1\x\7\w\r\2\s\s\2\b\v\a\b\1\i\n\m\n\2\c\n\r\c\t\m\c\4\4\6\l\g\h\t\1\w\i\9\s\7\v\i\p\p\a\x\s\9\g\7\6\p\p\m\8\p\j\r\4\h\c\m\b\n\h\e\z\9\m\t\w\p\q\x\b\q\k\f\k\d\a\j\d\p\x\t\g\o\b\d\d\x\e\o\6\7\p\5\5\m\l\2\1\s\k\y\v\y\r\s\g\3\l\t\5\t\e\u\x\l\c\3\n\u\e\9\f\q\7\k\5\t\5\4\m\0\t\b\3\h\p\x\u\a\k\6\2\u\f\x\q\v\f\a\t\h\b\1\w\0\2\1\0\r\b\5\f\i\4\g\q\g\k\w\u\t\n\g\p\o\p\6\p\c\m\3\w\0\p\y\h\2\s\p\b\h\1\k\w\5\y\7\0\0\j\1\u\d\i\7\r\0\4\m\x\m\3\g\s\x\d\u\y\y\g\v\j\3\2\j\g\h\p\p\c\9\8\v\f\z\9\9\l\l\k\2\r\a\q\t\a\0\4\w\d\0\t\d\h\g\e\r\n\8\q\m\7\8\o\n\j\s\h\7\6\z\9\y\1\j\a\d\d\a\c\q\q\g\3\a\2\x\d\g\z\o\o\e\e\3\v\f\4\0\n\i\d\l\3\k\q\b\2\j\j\e\j\d\u\l\p\h\w\8\t\9\7\e\q\3\4\f\t\u\1\4\e\8\h\p\j\4\a\z\o\c\1\3\a\m\1\x\8\v\t\c\g\s\b\o\f\c\2\u\g\8\b\2\0\5\5\1\0\6\2\9\0\e\t\b\w\m\7\p\n\c\p\d\4\p\1\l\c\7\b\x\l\h\5\m\b\n\x\2\m\c\7\9\g\p\4\m\u\5\h\0\8\7\u\8\z\j\c\3\2\h\0\r\v\1\r\o\m\0\q\x\7\k\q\q\d\3\q\v\y\q\2\m\p\w\6\h\p\1\r\c\u\u\t\n\y\i\j\a\k\5\j\4\y\a\0\y\i\z\r\p\l\d\5\i\c\m\z\i\e\1\b\q\u\z\x\g\w\8\3\2\w\g\0\7\f\x\3\d\a\i\v\n\v\v\s\d\1\n\b\f\0\t\8\4\1\g\j\l\q\n\u\1\o\3\3\0\u\3\m\3\f\1\r\t\2\0\0\i\q\9\6\x\0\t\q\w\v\d\h\a\m\x\r\b\9\6\w\j\s\l\c\3\o\t\0\z\q\t\7\v\1\a\m\3\z\8\n\d\c\w\7\n\g\n\k\a\x\q\q\k\m\u\0\t\b\o\l\3\2\8\j\8\4\4\l\4\l\n\8\4\x\w\6\a\r\r\1\8\4\h\i\d\9\7\r\x\b\h\p\u\a\x\p\b\2\7\0\k\5\d\3\b\k\q\4\7\z\1\o\k\m\4\i\w\w\1\h\d\2\9\e\3\j\z\p\r\v\u\w\y\3\s\2\2\v\j\i\q\2\y\b\o\o\r\3\s\1\q\a\1\x\c\g\c\x\t\n\a\q\i\9\e\a\7\0\b\u\s\i\6\9\i\w\p\8\u\5\s\9\7\8\x\7\m\d\p\a\n\u\0\0\s\q\i\m\6\j\l\3\x\n\p\u\z\3\x\e\0\4\g\w\7\9\n\w\r\k\d\1\d\q\w\1\w\8\e\5\w\n\t\p\a\2\3\y\e\v\2\w\0\v\9\h\z\s\a\p\i\8\z\y\e\k\w\x\x\4\x\z\p\e\y\8\d\1\9\o\0\o\3\v\b\b\o\g\u\8\o\t\u\y\k\9\r\d\g\f\t\7\n\x\q\v\4\v\h\u\0\l\d\0\5\0\t\s\d\t\b\l\7\v\e\z\r\2\j\m\t\v\8\l\i\7\8\q\k\s\7\4\6\1\f\m\4\k\l\d\4\o\5\m\6\4\r\n\d\q\9\i\q\r\1\w\3\i\4\2\1\d\2\8\5\7\5\8\v\e\u\r\q\3\g\v\o\b\u\v\v\w\k\j\d\8\8\3\z\t\m\2\5\t\e\o\s\0\9\9\d\1\j\u\u\9\2\w\y\6\b\2\j\7\6\m\v\t\c\f\y\t\x\i\e\u\f\p\x\x\2\2\j\u\b\4\k\1\7\a\8\g\a\z\6\i\l\4\y\h\b\d\r\y\e\r\m\1\i\7\m\u\x\f\z\l\1\i\3\4\0\5\z\t\l\d\0\z\i\i\9\s\l\a\0\4\g\9\g\s\f\r\2\z\0\c\i\k\i\4\i\g\b\t\g\4\3\4\q\7\x\a\f\9\g\0\m\0\e\b\r\q\d\7\t\m\3\7\n\r\1\9\j\y\v\f\g\s\4\n\v\f\b\b\6\y\2\4\f\r\z\h\o\2\3\x\l\4\0\j\3\f\q\9\h\8\f\e\4\v\1\i\z\u\d\9\d\3\2\w\
q\z\g\y\o\u\7\j\7\2\1\7\j\d\f\p\g\i\3\9\a\b\s\7\7\b\r\h\7\r\x\g\1\r\t\2\f\s\j\e\j\0\9\0\p\z\5\w\c\i\a\m\p\f\4\v\a\s\t\q\v\f\q\y\u\i\t\0\5\v\5\x\b\p\i\l\m\b\u\c\q\u\g\v\c\q\3\q\n\g\j\n\1\j\9\7\o\e\d\i\4\k\w\4\o\s\a\n\8\y\o\z\c\e\v\j\a\x\l\e\r\n\1\t\1\0\q\x\z\f\r\t\v\o\a\p\t\l\1\7\a\f\t\k\8\k\e\3\f\m\l\6\r\r\j\q\i\0\r\t\t\2\d\8\u\8\a\c\n\u\d\e\m\e\5\f\5\0\i\8\q\g\1\8\g\3\a\u\j\s\i\8\d\q\v\e\8\q\l\2\l\v\2\w\x\r\n\e\8\a\0\g\w\g\0\3\p\5\t\z\4\d\r\y\6\3\4\5\4\g\4\h\1\3\p\8\f\x\3\c\b\w\f\k\9\u\k\y\v\l\n\f\6\v\k\b\h\2\q\8\b\3\p\w\x\t\s\1\r\d\d\w\n\r\3\d\g\l\z\v\2\s\v\1\8\p\0\5\s\7\9\y\h\8\i\r\p\j\u\6\c\j\b\2\t\b\p\1\j\8\6\t\d\u\k\y\w\p\9\6\d\5\7\z\q\2\i\s\q\z\l\u\v\c\l\v\l\x\k\f\k\j\0\t\s\j\9\n\4\0\1\0\i\j\7\y\y\j\a\5\k\i\v\v\a\d\u\9\9\9\2\q\9\c\g\x\s\q\u\f\s\u\2\2\v\q\r\9\j\k\5\p\8\r\p\x\6\b\r\q\u\4\e\y\t\m\y\s\z\n\v\0\j\a\0\7\k\1\v\p\2\b\2\5\m\d\u\w\t\3\x\6\o\u\z\c\g\n\o\q\7\h\n\q\e\i\4\k\x\p\k\4\u\b\g\u\u\z\z\q\3\o\a\p\m\4\y\o\6\k\4\o\l\l\7\f\2\s\q\9\f\g\5\f\b\g\2\5\u\7\h\4\n\o\j\a\m\n\6\f\0\p\z\2\g\z\b\e\j\9\3\l\c\y\5\4\k\o\2\b\7\9\x\p\d\5\x\6\s\1\1\y\l\k\y\9\q\z\u\7\0\z\x\a\r\c\6\c\x\g\g\3\d\7\w\z\2\7\x\s\v\e\1\t\8\r\8\v\o\e\v\p\z\u\2\o\7\q\8\m\9\a\l\m\j\p\1\8\3\i\6\4\z\l\z\l\0\s\c\f\z\y\v\6\0\8\m\e\i\y\m\7\7\l\s\b\f\b\a\j\d\k\1\y\8\z\r\i\l\4\u\6\j\s\d\w\q\l\m\g\p\h\b\m\k\7\t\f\m\6\w\f\6\2\e\5\5\d\4\i\6\0\n\m\k\9\v\b\a\h\x\x\a\1\f\g\1\h\3\u\5\j\r\a\m\a\y\z\f\b\x\y\a\i\j\t\l\i\x\s\0\w\n\2\1\6\f\w\p\b\0\c\e\d\1\5\8\x\5\0\n\3\3\5\7\z\b\a\d\r\g\o\c\m\u\0\m\o\k\p\t\k\c\h\u\t\d\t\6\l\8\f\t\h\t\5\0\h\m\b\u\k\i\p\6\k\q\8\7\s\m\8\1\0\4\m\f\8\t\p\2\o\k\0\e\4\5\1\v\i\z\6\m\4\7\d\4\l\0\h\4\f\d\i\m\z\t\6\7\6\t\6\x\v\l\g\s\q\o\j\w\7\l\c\u\7\q\m\c\1\i\y\1\e\e\x\r\6\o\t\u\6\o\r\6\l\1\q\b\a\7\6\t\s\3\q\0\o\y\q\m\k\b\4\s\w\i\e\c\u\s\r\j\s\1\p\p\u\2\o\n\y\k\8\w\2\2\s\0\o\x\l\o\g\9\e\s\n\e\1\5\f\r\g\o\u\i\s\v\k\1\4\s\f\i\g\m\3\0\s\5\z\u\c\w\9\h\x\b\k\x\5\v\f\d\p\p\v\m\a\u\d\i\c\t\m\a\t\3\j\q\2\p\q\i\p\l\4\m\r\z\0\b\9\x\f\1\m\2\j\m\4\1\1\w\2\z\u\u\0\m\p\x\p\v\q\t\y\p\g\r\x\w\x\1\m\p\4\w\t\4\x\7\z\p\g\f\d\n\d\c\s\m\q\p\v\b\5\k\a\b\s\v\9\a\a\h\2\d\z\r\9\z\o\u\o\6\4\x\v\r\g\n\j\8\0\r\o\a\q\8\1\s\z\1\0\m\m\5\m\y\y\a\g\3\y\0\m\m\k\k\4\t\n\w\l\n\g\9\n\v\f\g\8\h\y\s\q\p\y\b\f\0\p\r\7\8\j\6\u\q\w\m\s\a\c\a\k\e\p\4\3\j\w\k\f\c\j\r\8\7\6\u\f\4\a\o\o\e\l\o\p\h\4\n\7\9\g\u\b\e\r\z\r\n\8\t\b\0\u\j\6\u\t\x\d\m\z\h\5\h\x\n\i\w\1\t\p\d\1\a\y\b\1\1\2\7\r\u\t\m\d\a\p\3\9\f\g\4\3\y\l\w\6\j\d\h\2\3\2\1\l\y\2\n\3\p\n\l\j\h\6\v\l\9\e\q\o\k\f\z\7\b\m\8\b\m\4\c\i\1\o\s\b\o\1\k\d\u\t\3\g\i\k\c\a\m\y\3\f\k\w\q\i\x\8\z\p\q\v\6\5\4\n\u\i\v\p\e\1\f\q\f\y\o\b\5\x\q\7\m\e\b\0\w\2\6\y\1\7\n\d\k\r\r\w\e\r\c\t\3\l\k\z\q\f\e\f\f\w\5\0\u\0\a\v\j\b\w\b\f\p\s\c\2\u\a\3\a\n\0\t\m\6\u\p\l\q\o\f\v\a\b\g\u\j\x\g\w\l\m\1\p\l\j\f\f\7\d\9\l\y\3\9\8\i\f\v\j\h\u\m\x\f\2\d\w\6\q\4\x\g\f\t\3\a\j\t\7\z\7\s\6\z\6\4\d\q\e\3\d\z\x\h\s\t\v\o\o\w\5\z ]] 00:06:59.065 00:06:59.065 real 0m1.108s 00:06:59.065 user 0m0.763s 00:06:59.065 sys 0m0.220s 00:06:59.065 04:22:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.065 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.065 ************************************ 00:06:59.065 END TEST dd_rw_offset 00:06:59.065 ************************************ 00:06:59.065 04:22:02 -- dd/basic_rw.sh@1 -- # cleanup 00:06:59.065 04:22:02 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:59.065 04:22:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:59.065 04:22:02 -- dd/common.sh@11 -- # local nvme_ref= 00:06:59.065 04:22:02 -- dd/common.sh@12 -- # local size=0xffff 00:06:59.065 04:22:02 -- dd/common.sh@14 -- 
# local bs=1048576 00:06:59.065 04:22:02 -- dd/common.sh@15 -- # local count=1 00:06:59.065 04:22:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:59.065 04:22:02 -- dd/common.sh@18 -- # gen_conf 00:06:59.065 04:22:02 -- dd/common.sh@31 -- # xtrace_disable 00:06:59.065 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.065 [2024-12-07 04:22:02.147046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.065 [2024-12-07 04:22:02.147288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58017 ] 00:06:59.065 { 00:06:59.065 "subsystems": [ 00:06:59.065 { 00:06:59.065 "subsystem": "bdev", 00:06:59.065 "config": [ 00:06:59.065 { 00:06:59.065 "params": { 00:06:59.065 "trtype": "pcie", 00:06:59.065 "traddr": "0000:00:06.0", 00:06:59.065 "name": "Nvme0" 00:06:59.065 }, 00:06:59.065 "method": "bdev_nvme_attach_controller" 00:06:59.065 }, 00:06:59.065 { 00:06:59.065 "method": "bdev_wait_for_examine" 00:06:59.065 } 00:06:59.065 ] 00:06:59.065 } 00:06:59.065 ] 00:06:59.065 } 00:06:59.065 [2024-12-07 04:22:02.280465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.350 [2024-12-07 04:22:02.335668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.350  [2024-12-07T04:22:02.865Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:59.625 00:06:59.625 04:22:02 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.625 ************************************ 00:06:59.625 END TEST spdk_dd_basic_rw 00:06:59.625 ************************************ 00:06:59.625 00:06:59.625 real 0m15.828s 00:06:59.625 user 0m11.573s 00:06:59.625 sys 0m2.830s 00:06:59.625 04:22:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.625 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.625 04:22:02 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:59.625 04:22:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.625 04:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.625 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.625 ************************************ 00:06:59.625 START TEST spdk_dd_posix 00:06:59.625 ************************************ 00:06:59.625 04:22:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:59.625 * Looking for test storage... 
00:06:59.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.625 04:22:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:59.625 04:22:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:59.625 04:22:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:59.625 04:22:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:59.625 04:22:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:59.625 04:22:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:59.625 04:22:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:59.625 04:22:02 -- scripts/common.sh@335 -- # IFS=.-: 00:06:59.625 04:22:02 -- scripts/common.sh@335 -- # read -ra ver1 00:06:59.625 04:22:02 -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.625 04:22:02 -- scripts/common.sh@336 -- # read -ra ver2 00:06:59.625 04:22:02 -- scripts/common.sh@337 -- # local 'op=<' 00:06:59.625 04:22:02 -- scripts/common.sh@339 -- # ver1_l=2 00:06:59.625 04:22:02 -- scripts/common.sh@340 -- # ver2_l=1 00:06:59.625 04:22:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:59.625 04:22:02 -- scripts/common.sh@343 -- # case "$op" in 00:06:59.625 04:22:02 -- scripts/common.sh@344 -- # : 1 00:06:59.625 04:22:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:59.625 04:22:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.625 04:22:02 -- scripts/common.sh@364 -- # decimal 1 00:06:59.625 04:22:02 -- scripts/common.sh@352 -- # local d=1 00:06:59.625 04:22:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.625 04:22:02 -- scripts/common.sh@354 -- # echo 1 00:06:59.625 04:22:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:59.625 04:22:02 -- scripts/common.sh@365 -- # decimal 2 00:06:59.625 04:22:02 -- scripts/common.sh@352 -- # local d=2 00:06:59.625 04:22:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.625 04:22:02 -- scripts/common.sh@354 -- # echo 2 00:06:59.625 04:22:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:59.625 04:22:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:59.625 04:22:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:59.625 04:22:02 -- scripts/common.sh@367 -- # return 0 00:06:59.625 04:22:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.625 04:22:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:59.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.625 --rc genhtml_branch_coverage=1 00:06:59.625 --rc genhtml_function_coverage=1 00:06:59.625 --rc genhtml_legend=1 00:06:59.625 --rc geninfo_all_blocks=1 00:06:59.625 --rc geninfo_unexecuted_blocks=1 00:06:59.625 00:06:59.625 ' 00:06:59.625 04:22:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:59.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.625 --rc genhtml_branch_coverage=1 00:06:59.625 --rc genhtml_function_coverage=1 00:06:59.625 --rc genhtml_legend=1 00:06:59.625 --rc geninfo_all_blocks=1 00:06:59.625 --rc geninfo_unexecuted_blocks=1 00:06:59.625 00:06:59.625 ' 00:06:59.625 04:22:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:59.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.625 --rc genhtml_branch_coverage=1 00:06:59.625 --rc genhtml_function_coverage=1 00:06:59.625 --rc genhtml_legend=1 00:06:59.625 --rc geninfo_all_blocks=1 00:06:59.625 --rc geninfo_unexecuted_blocks=1 00:06:59.625 00:06:59.625 ' 00:06:59.625 04:22:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:59.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.625 --rc genhtml_branch_coverage=1 00:06:59.625 --rc genhtml_function_coverage=1 00:06:59.625 --rc genhtml_legend=1 00:06:59.625 --rc geninfo_all_blocks=1 00:06:59.625 --rc geninfo_unexecuted_blocks=1 00:06:59.625 00:06:59.625 ' 00:06:59.625 04:22:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.625 04:22:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.625 04:22:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.625 04:22:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.625 04:22:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 04:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 04:22:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 04:22:02 -- paths/export.sh@5 -- # export PATH 00:06:59.626 04:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 04:22:02 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:59.626 04:22:02 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:59.626 04:22:02 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:59.626 04:22:02 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:59.626 04:22:02 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.626 04:22:02 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.626 04:22:02 -- dd/posix.sh@130 -- # tests 00:06:59.626 04:22:02 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:59.626 * First test run, liburing in use 00:06:59.626 04:22:02 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:59.626 04:22:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.626 04:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.626 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.885 ************************************ 00:06:59.885 START TEST dd_flag_append 00:06:59.885 ************************************ 00:06:59.885 04:22:02 -- common/autotest_common.sh@1114 -- # append 00:06:59.885 04:22:02 -- dd/posix.sh@16 -- # local dump0 00:06:59.885 04:22:02 -- dd/posix.sh@17 -- # local dump1 00:06:59.885 04:22:02 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:59.885 04:22:02 -- dd/common.sh@98 -- # xtrace_disable 00:06:59.885 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.885 04:22:02 -- dd/posix.sh@19 -- # dump0=yemlv2qx4uin0z5009fl48dx7deb42zk 00:06:59.885 04:22:02 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:59.885 04:22:02 -- dd/common.sh@98 -- # xtrace_disable 00:06:59.885 04:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:59.885 04:22:02 -- dd/posix.sh@20 -- # dump1=7qfis9glugtebe52mqnzbco44yq8pqjb 00:06:59.885 04:22:02 -- dd/posix.sh@22 -- # printf %s yemlv2qx4uin0z5009fl48dx7deb42zk 00:06:59.885 04:22:02 -- dd/posix.sh@23 -- # printf %s 7qfis9glugtebe52mqnzbco44yq8pqjb 00:06:59.885 04:22:02 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:59.885 [2024-12-07 04:22:02.916277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
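dd_flag_append seeds the two dump files with different 32-byte strings and then copies dd.dump0 onto dd.dump1 with --oflag=append, so the copy has to land after dd.dump1's existing contents rather than truncating them; the check that follows confirms dd.dump1 now holds the second string immediately followed by the first (7qfis9... then yemlv2...). A condensed sketch with placeholder strings instead of the random ones used in this run (SPDK_DD and DD as in the earlier sketches; no bdev config is needed for a file-to-file copy):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd; DD=/home/vagrant/spdk_repo/spdk/test/dd
dump0=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA   # placeholder for the first 32-byte marker
dump1=BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB   # placeholder for the second 32-byte marker
printf %s "$dump0" > "$DD/dd.dump0"
printf %s "$dump1" > "$DD/dd.dump1"
# With --oflag=append the copy must not truncate dd.dump1
"$SPDK_DD" --if="$DD/dd.dump0" --of="$DD/dd.dump1" --oflag=append
[[ "$(cat "$DD/dd.dump1")" == "${dump1}${dump0}" ]] && echo "append works"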
00:06:59.885 [2024-12-07 04:22:02.916497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58087 ] 00:06:59.885 [2024-12-07 04:22:03.044968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.885 [2024-12-07 04:22:03.097974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.144  [2024-12-07T04:22:03.384Z] Copying: 32/32 [B] (average 31 kBps) 00:07:00.144 00:07:00.144 04:22:03 -- dd/posix.sh@27 -- # [[ 7qfis9glugtebe52mqnzbco44yq8pqjbyemlv2qx4uin0z5009fl48dx7deb42zk == \7\q\f\i\s\9\g\l\u\g\t\e\b\e\5\2\m\q\n\z\b\c\o\4\4\y\q\8\p\q\j\b\y\e\m\l\v\2\q\x\4\u\i\n\0\z\5\0\0\9\f\l\4\8\d\x\7\d\e\b\4\2\z\k ]] 00:07:00.144 00:07:00.144 real 0m0.441s 00:07:00.144 user 0m0.232s 00:07:00.144 sys 0m0.089s 00:07:00.144 04:22:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.144 ************************************ 00:07:00.144 END TEST dd_flag_append 00:07:00.144 ************************************ 00:07:00.144 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.144 04:22:03 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:00.144 04:22:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.145 04:22:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.145 04:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.145 ************************************ 00:07:00.145 START TEST dd_flag_directory 00:07:00.145 ************************************ 00:07:00.145 04:22:03 -- common/autotest_common.sh@1114 -- # directory 00:07:00.145 04:22:03 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.145 04:22:03 -- common/autotest_common.sh@650 -- # local es=0 00:07:00.145 04:22:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.145 04:22:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.145 04:22:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.145 04:22:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.145 04:22:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.145 04:22:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.145 04:22:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.145 04:22:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.145 04:22:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.145 04:22:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.404 [2024-12-07 04:22:03.414940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
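dd_flag_directory is a negative test: dd.dump0 is a regular file, so opening it with --iflag=directory (and, in the second half, with --oflag=directory) has to fail with "Not a directory", and the NOT wrapper turns that expected failure into a pass by asserting a non-zero exit status, which is what the es= bookkeeping in the trace is doing. Stripped of the wrapper machinery, the assertion amounts to the following sketch (the "|| exit 1" handling is illustrative, not the harness's own):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd; DD=/home/vagrant/spdk_repo/spdk/test/dd
# Both directions must be rejected for a non-directory path
! "$SPDK_DD" --if="$DD/dd.dump0" --iflag=directory --of="$DD/dd.dump0" || exit 1
! "$SPDK_DD" --if="$DD/dd.dump0" --of="$DD/dd.dump0" --oflag=directory || exit 1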
00:07:00.404 [2024-12-07 04:22:03.415029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58114 ] 00:07:00.404 [2024-12-07 04:22:03.550135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.404 [2024-12-07 04:22:03.602521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.663 [2024-12-07 04:22:03.648038] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.663 [2024-12-07 04:22:03.648092] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.663 [2024-12-07 04:22:03.648120] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.663 [2024-12-07 04:22:03.708351] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:00.663 04:22:03 -- common/autotest_common.sh@653 -- # es=236 00:07:00.663 04:22:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.663 04:22:03 -- common/autotest_common.sh@662 -- # es=108 00:07:00.663 04:22:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:00.663 04:22:03 -- common/autotest_common.sh@670 -- # es=1 00:07:00.663 04:22:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.663 04:22:03 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.663 04:22:03 -- common/autotest_common.sh@650 -- # local es=0 00:07:00.663 04:22:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.663 04:22:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.663 04:22:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.663 04:22:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.663 04:22:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.663 04:22:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.663 04:22:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.663 04:22:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.663 04:22:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.663 04:22:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:00.663 [2024-12-07 04:22:03.870652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:00.663 [2024-12-07 04:22:03.870739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58123 ] 00:07:00.923 [2024-12-07 04:22:03.998387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.923 [2024-12-07 04:22:04.046206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.923 [2024-12-07 04:22:04.089615] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.923 [2024-12-07 04:22:04.089690] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:00.923 [2024-12-07 04:22:04.089719] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.923 [2024-12-07 04:22:04.147291] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:01.182 04:22:04 -- common/autotest_common.sh@653 -- # es=236 00:07:01.182 04:22:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.182 04:22:04 -- common/autotest_common.sh@662 -- # es=108 00:07:01.182 04:22:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.182 04:22:04 -- common/autotest_common.sh@670 -- # es=1 00:07:01.182 04:22:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.182 00:07:01.182 real 0m0.881s 00:07:01.182 user 0m0.489s 00:07:01.182 sys 0m0.183s 00:07:01.182 04:22:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.182 04:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:01.182 ************************************ 00:07:01.182 END TEST dd_flag_directory 00:07:01.182 ************************************ 00:07:01.182 04:22:04 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:01.182 04:22:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.182 04:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.182 04:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:01.182 ************************************ 00:07:01.182 START TEST dd_flag_nofollow 00:07:01.182 ************************************ 00:07:01.182 04:22:04 -- common/autotest_common.sh@1114 -- # nofollow 00:07:01.182 04:22:04 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.182 04:22:04 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.182 04:22:04 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.182 04:22:04 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.182 04:22:04 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.182 04:22:04 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.182 04:22:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.182 04:22:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.182 04:22:04 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.182 04:22:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.182 04:22:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.182 04:22:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.182 04:22:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.182 04:22:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.182 04:22:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.182 04:22:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.182 [2024-12-07 04:22:04.353462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.182 [2024-12-07 04:22:04.353553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58149 ] 00:07:01.442 [2024-12-07 04:22:04.491850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.442 [2024-12-07 04:22:04.559308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.442 [2024-12-07 04:22:04.611450] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:01.442 [2024-12-07 04:22:04.611512] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:01.442 [2024-12-07 04:22:04.611536] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.442 [2024-12-07 04:22:04.678659] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:01.702 04:22:04 -- common/autotest_common.sh@653 -- # es=216 00:07:01.702 04:22:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.702 04:22:04 -- common/autotest_common.sh@662 -- # es=88 00:07:01.702 04:22:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.702 04:22:04 -- common/autotest_common.sh@670 -- # es=1 00:07:01.702 04:22:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.702 04:22:04 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.702 04:22:04 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.702 04:22:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.702 04:22:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.702 04:22:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.702 04:22:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.702 04:22:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.702 04:22:04 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.702 04:22:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.702 04:22:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.702 04:22:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.702 04:22:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:01.702 [2024-12-07 04:22:04.832916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.702 [2024-12-07 04:22:04.833029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58161 ] 00:07:01.962 [2024-12-07 04:22:04.967989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.962 [2024-12-07 04:22:05.014569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.962 [2024-12-07 04:22:05.056066] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:01.962 [2024-12-07 04:22:05.056126] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:01.962 [2024-12-07 04:22:05.056156] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.962 [2024-12-07 04:22:05.111488] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:01.962 04:22:05 -- common/autotest_common.sh@653 -- # es=216 00:07:02.220 04:22:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.220 04:22:05 -- common/autotest_common.sh@662 -- # es=88 00:07:02.220 04:22:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.220 04:22:05 -- common/autotest_common.sh@670 -- # es=1 00:07:02.220 04:22:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.220 04:22:05 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:02.220 04:22:05 -- dd/common.sh@98 -- # xtrace_disable 00:07:02.220 04:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.220 04:22:05 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.220 [2024-12-07 04:22:05.264960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:02.220 [2024-12-07 04:22:05.265057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58163 ] 00:07:02.220 [2024-12-07 04:22:05.397460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.220 [2024-12-07 04:22:05.447513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.478  [2024-12-07T04:22:05.718Z] Copying: 512/512 [B] (average 500 kBps) 00:07:02.478 00:07:02.478 04:22:05 -- dd/posix.sh@49 -- # [[ uoixe8wb0cl33ptcsty9iydqibpupqvlw17pjuzcj42zzf0djmrcy3epvtd05s6woq283v906s92fzw474nmlllzs8pdi3dxmurwxxzygvb9jmzyhx8c3acml5c8ccz397po2wo7047bzmjuo907rcm4shglbp6np7uev6cmr8u23sgvex1i81tntu190thz9f66xfqwalaxzwiyi0o25ucwns5n6bvxq1sggtqvv04kb8w4gpvpdherlpibwzfhb5bgu4xbg10naq81ks36bmy1473rxgka7ricijdorc88yryn7nrmbu7zcketos5nx3dalgn3dmxtc0jflza6x270xnr9tdd1mm8q5mu661953b1a0jrtkm8p7l4mipfwgp50d1o9ip4gg5ft6422qo7bwtb6sfo5dfuxjzwes7dm6yn2jcckqt1d39jiz1hhg1qny5rzjlfwqc8lh3yd2msp3ufl63egkqroip25nghsuigwu7edvg4py4al53og == \u\o\i\x\e\8\w\b\0\c\l\3\3\p\t\c\s\t\y\9\i\y\d\q\i\b\p\u\p\q\v\l\w\1\7\p\j\u\z\c\j\4\2\z\z\f\0\d\j\m\r\c\y\3\e\p\v\t\d\0\5\s\6\w\o\q\2\8\3\v\9\0\6\s\9\2\f\z\w\4\7\4\n\m\l\l\l\z\s\8\p\d\i\3\d\x\m\u\r\w\x\x\z\y\g\v\b\9\j\m\z\y\h\x\8\c\3\a\c\m\l\5\c\8\c\c\z\3\9\7\p\o\2\w\o\7\0\4\7\b\z\m\j\u\o\9\0\7\r\c\m\4\s\h\g\l\b\p\6\n\p\7\u\e\v\6\c\m\r\8\u\2\3\s\g\v\e\x\1\i\8\1\t\n\t\u\1\9\0\t\h\z\9\f\6\6\x\f\q\w\a\l\a\x\z\w\i\y\i\0\o\2\5\u\c\w\n\s\5\n\6\b\v\x\q\1\s\g\g\t\q\v\v\0\4\k\b\8\w\4\g\p\v\p\d\h\e\r\l\p\i\b\w\z\f\h\b\5\b\g\u\4\x\b\g\1\0\n\a\q\8\1\k\s\3\6\b\m\y\1\4\7\3\r\x\g\k\a\7\r\i\c\i\j\d\o\r\c\8\8\y\r\y\n\7\n\r\m\b\u\7\z\c\k\e\t\o\s\5\n\x\3\d\a\l\g\n\3\d\m\x\t\c\0\j\f\l\z\a\6\x\2\7\0\x\n\r\9\t\d\d\1\m\m\8\q\5\m\u\6\6\1\9\5\3\b\1\a\0\j\r\t\k\m\8\p\7\l\4\m\i\p\f\w\g\p\5\0\d\1\o\9\i\p\4\g\g\5\f\t\6\4\2\2\q\o\7\b\w\t\b\6\s\f\o\5\d\f\u\x\j\z\w\e\s\7\d\m\6\y\n\2\j\c\c\k\q\t\1\d\3\9\j\i\z\1\h\h\g\1\q\n\y\5\r\z\j\l\f\w\q\c\8\l\h\3\y\d\2\m\s\p\3\u\f\l\6\3\e\g\k\q\r\o\i\p\2\5\n\g\h\s\u\i\g\w\u\7\e\d\v\g\4\p\y\4\a\l\5\3\o\g ]] 00:07:02.478 00:07:02.478 real 0m1.373s 00:07:02.478 user 0m0.742s 00:07:02.478 sys 0m0.292s 00:07:02.478 04:22:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.478 04:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.478 ************************************ 00:07:02.478 END TEST dd_flag_nofollow 00:07:02.478 ************************************ 00:07:02.478 04:22:05 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:02.478 04:22:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.478 04:22:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.478 04:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.737 ************************************ 00:07:02.737 START TEST dd_flag_noatime 00:07:02.737 ************************************ 00:07:02.737 04:22:05 -- common/autotest_common.sh@1114 -- # noatime 00:07:02.737 04:22:05 -- dd/posix.sh@53 -- # local atime_if 00:07:02.737 04:22:05 -- dd/posix.sh@54 -- # local atime_of 00:07:02.737 04:22:05 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:02.737 04:22:05 -- dd/common.sh@98 -- # xtrace_disable 00:07:02.737 04:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:02.737 04:22:05 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.737 04:22:05 -- dd/posix.sh@60 -- # atime_if=1733545325 
00:07:02.737 04:22:05 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.737 04:22:05 -- dd/posix.sh@61 -- # atime_of=1733545325 00:07:02.737 04:22:05 -- dd/posix.sh@66 -- # sleep 1 00:07:03.674 04:22:06 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.674 [2024-12-07 04:22:06.796320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.674 [2024-12-07 04:22:06.796422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:07:03.933 [2024-12-07 04:22:06.935560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.933 [2024-12-07 04:22:07.002399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.933  [2024-12-07T04:22:07.433Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.193 00:07:04.193 04:22:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.193 04:22:07 -- dd/posix.sh@69 -- # (( atime_if == 1733545325 )) 00:07:04.193 04:22:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.193 04:22:07 -- dd/posix.sh@70 -- # (( atime_of == 1733545325 )) 00:07:04.193 04:22:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.193 [2024-12-07 04:22:07.283439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:04.193 [2024-12-07 04:22:07.283550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58217 ] 00:07:04.193 [2024-12-07 04:22:07.415464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.452 [2024-12-07 04:22:07.463316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.452  [2024-12-07T04:22:07.692Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.452 00:07:04.452 04:22:07 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.452 04:22:07 -- dd/posix.sh@73 -- # (( atime_if < 1733545327 )) 00:07:04.452 00:07:04.452 real 0m1.953s 00:07:04.452 user 0m0.508s 00:07:04.452 sys 0m0.207s 00:07:04.452 04:22:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.452 ************************************ 00:07:04.452 END TEST dd_flag_noatime 00:07:04.452 ************************************ 00:07:04.452 04:22:07 -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 04:22:07 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:04.712 04:22:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.712 04:22:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.712 04:22:07 -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 ************************************ 00:07:04.712 START TEST dd_flags_misc 00:07:04.712 ************************************ 00:07:04.712 04:22:07 -- common/autotest_common.sh@1114 -- # io 00:07:04.712 04:22:07 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:04.712 04:22:07 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:04.712 04:22:07 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:04.712 04:22:07 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:04.712 04:22:07 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:04.712 04:22:07 -- dd/common.sh@98 -- # xtrace_disable 00:07:04.712 04:22:07 -- common/autotest_common.sh@10 -- # set +x 00:07:04.712 04:22:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.712 04:22:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:04.712 [2024-12-07 04:22:07.784188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:04.712 [2024-12-07 04:22:07.784288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58249 ] 00:07:04.712 [2024-12-07 04:22:07.921733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.971 [2024-12-07 04:22:07.970564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.971  [2024-12-07T04:22:08.211Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.971 00:07:04.971 04:22:08 -- dd/posix.sh@93 -- # [[ 9pr60fqrtpf80ag72vlv25yxwh01hgzq4vrya2jcn8khzyep6ruz1nvcsh7clgqiw8wgmrjv7yeerrb9yupfsr4a40eowd9ryghfokca7c22xeylyhcnocjlxlnc4res7hin2q1t5e2zx3jqioghoo8oh3sj854jdgpgwj6x21im1p8vl5exbhb67w3d9ofw87tvelut4rn3ujju6j9jwsh5ts1z87evyp3jd2x04o6coxydric86psq0y60xbqdmrqbcpakw23wsivg4fyqlcbjqc8k1n39qwxh47aisz6b3d9t7793j5gpv673knyrqpnzwkw03fxuae4rtxgjs9lo54ah8o0te3z3mj04dszbuv6lsb1muo2o9dob8mfzp6v00l8r24m44aqmx2gouzh8aiiph3y34vi60oy6xgcss99mc18fydzw311qpisskurks4typhg5urlh7wh3p9pptyowbaol13wm36z7l3xlw9tt12fmrrqe8ew62sho == \9\p\r\6\0\f\q\r\t\p\f\8\0\a\g\7\2\v\l\v\2\5\y\x\w\h\0\1\h\g\z\q\4\v\r\y\a\2\j\c\n\8\k\h\z\y\e\p\6\r\u\z\1\n\v\c\s\h\7\c\l\g\q\i\w\8\w\g\m\r\j\v\7\y\e\e\r\r\b\9\y\u\p\f\s\r\4\a\4\0\e\o\w\d\9\r\y\g\h\f\o\k\c\a\7\c\2\2\x\e\y\l\y\h\c\n\o\c\j\l\x\l\n\c\4\r\e\s\7\h\i\n\2\q\1\t\5\e\2\z\x\3\j\q\i\o\g\h\o\o\8\o\h\3\s\j\8\5\4\j\d\g\p\g\w\j\6\x\2\1\i\m\1\p\8\v\l\5\e\x\b\h\b\6\7\w\3\d\9\o\f\w\8\7\t\v\e\l\u\t\4\r\n\3\u\j\j\u\6\j\9\j\w\s\h\5\t\s\1\z\8\7\e\v\y\p\3\j\d\2\x\0\4\o\6\c\o\x\y\d\r\i\c\8\6\p\s\q\0\y\6\0\x\b\q\d\m\r\q\b\c\p\a\k\w\2\3\w\s\i\v\g\4\f\y\q\l\c\b\j\q\c\8\k\1\n\3\9\q\w\x\h\4\7\a\i\s\z\6\b\3\d\9\t\7\7\9\3\j\5\g\p\v\6\7\3\k\n\y\r\q\p\n\z\w\k\w\0\3\f\x\u\a\e\4\r\t\x\g\j\s\9\l\o\5\4\a\h\8\o\0\t\e\3\z\3\m\j\0\4\d\s\z\b\u\v\6\l\s\b\1\m\u\o\2\o\9\d\o\b\8\m\f\z\p\6\v\0\0\l\8\r\2\4\m\4\4\a\q\m\x\2\g\o\u\z\h\8\a\i\i\p\h\3\y\3\4\v\i\6\0\o\y\6\x\g\c\s\s\9\9\m\c\1\8\f\y\d\z\w\3\1\1\q\p\i\s\s\k\u\r\k\s\4\t\y\p\h\g\5\u\r\l\h\7\w\h\3\p\9\p\p\t\y\o\w\b\a\o\l\1\3\w\m\3\6\z\7\l\3\x\l\w\9\t\t\1\2\f\m\r\r\q\e\8\e\w\6\2\s\h\o ]] 00:07:04.971 04:22:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.971 04:22:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.230 [2024-12-07 04:22:08.230740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:05.230 [2024-12-07 04:22:08.230841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58251 ] 00:07:05.230 [2024-12-07 04:22:08.365923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.230 [2024-12-07 04:22:08.420867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.230  [2024-12-07T04:22:08.730Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.490 00:07:05.490 04:22:08 -- dd/posix.sh@93 -- # [[ 9pr60fqrtpf80ag72vlv25yxwh01hgzq4vrya2jcn8khzyep6ruz1nvcsh7clgqiw8wgmrjv7yeerrb9yupfsr4a40eowd9ryghfokca7c22xeylyhcnocjlxlnc4res7hin2q1t5e2zx3jqioghoo8oh3sj854jdgpgwj6x21im1p8vl5exbhb67w3d9ofw87tvelut4rn3ujju6j9jwsh5ts1z87evyp3jd2x04o6coxydric86psq0y60xbqdmrqbcpakw23wsivg4fyqlcbjqc8k1n39qwxh47aisz6b3d9t7793j5gpv673knyrqpnzwkw03fxuae4rtxgjs9lo54ah8o0te3z3mj04dszbuv6lsb1muo2o9dob8mfzp6v00l8r24m44aqmx2gouzh8aiiph3y34vi60oy6xgcss99mc18fydzw311qpisskurks4typhg5urlh7wh3p9pptyowbaol13wm36z7l3xlw9tt12fmrrqe8ew62sho == \9\p\r\6\0\f\q\r\t\p\f\8\0\a\g\7\2\v\l\v\2\5\y\x\w\h\0\1\h\g\z\q\4\v\r\y\a\2\j\c\n\8\k\h\z\y\e\p\6\r\u\z\1\n\v\c\s\h\7\c\l\g\q\i\w\8\w\g\m\r\j\v\7\y\e\e\r\r\b\9\y\u\p\f\s\r\4\a\4\0\e\o\w\d\9\r\y\g\h\f\o\k\c\a\7\c\2\2\x\e\y\l\y\h\c\n\o\c\j\l\x\l\n\c\4\r\e\s\7\h\i\n\2\q\1\t\5\e\2\z\x\3\j\q\i\o\g\h\o\o\8\o\h\3\s\j\8\5\4\j\d\g\p\g\w\j\6\x\2\1\i\m\1\p\8\v\l\5\e\x\b\h\b\6\7\w\3\d\9\o\f\w\8\7\t\v\e\l\u\t\4\r\n\3\u\j\j\u\6\j\9\j\w\s\h\5\t\s\1\z\8\7\e\v\y\p\3\j\d\2\x\0\4\o\6\c\o\x\y\d\r\i\c\8\6\p\s\q\0\y\6\0\x\b\q\d\m\r\q\b\c\p\a\k\w\2\3\w\s\i\v\g\4\f\y\q\l\c\b\j\q\c\8\k\1\n\3\9\q\w\x\h\4\7\a\i\s\z\6\b\3\d\9\t\7\7\9\3\j\5\g\p\v\6\7\3\k\n\y\r\q\p\n\z\w\k\w\0\3\f\x\u\a\e\4\r\t\x\g\j\s\9\l\o\5\4\a\h\8\o\0\t\e\3\z\3\m\j\0\4\d\s\z\b\u\v\6\l\s\b\1\m\u\o\2\o\9\d\o\b\8\m\f\z\p\6\v\0\0\l\8\r\2\4\m\4\4\a\q\m\x\2\g\o\u\z\h\8\a\i\i\p\h\3\y\3\4\v\i\6\0\o\y\6\x\g\c\s\s\9\9\m\c\1\8\f\y\d\z\w\3\1\1\q\p\i\s\s\k\u\r\k\s\4\t\y\p\h\g\5\u\r\l\h\7\w\h\3\p\9\p\p\t\y\o\w\b\a\o\l\1\3\w\m\3\6\z\7\l\3\x\l\w\9\t\t\1\2\f\m\r\r\q\e\8\e\w\6\2\s\h\o ]] 00:07:05.490 04:22:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.490 04:22:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:05.490 [2024-12-07 04:22:08.672936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:05.490 [2024-12-07 04:22:08.673066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58259 ] 00:07:05.750 [2024-12-07 04:22:08.808363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.750 [2024-12-07 04:22:08.854566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.750  [2024-12-07T04:22:09.250Z] Copying: 512/512 [B] (average 500 kBps) 00:07:06.010 00:07:06.010 04:22:09 -- dd/posix.sh@93 -- # [[ 9pr60fqrtpf80ag72vlv25yxwh01hgzq4vrya2jcn8khzyep6ruz1nvcsh7clgqiw8wgmrjv7yeerrb9yupfsr4a40eowd9ryghfokca7c22xeylyhcnocjlxlnc4res7hin2q1t5e2zx3jqioghoo8oh3sj854jdgpgwj6x21im1p8vl5exbhb67w3d9ofw87tvelut4rn3ujju6j9jwsh5ts1z87evyp3jd2x04o6coxydric86psq0y60xbqdmrqbcpakw23wsivg4fyqlcbjqc8k1n39qwxh47aisz6b3d9t7793j5gpv673knyrqpnzwkw03fxuae4rtxgjs9lo54ah8o0te3z3mj04dszbuv6lsb1muo2o9dob8mfzp6v00l8r24m44aqmx2gouzh8aiiph3y34vi60oy6xgcss99mc18fydzw311qpisskurks4typhg5urlh7wh3p9pptyowbaol13wm36z7l3xlw9tt12fmrrqe8ew62sho == \9\p\r\6\0\f\q\r\t\p\f\8\0\a\g\7\2\v\l\v\2\5\y\x\w\h\0\1\h\g\z\q\4\v\r\y\a\2\j\c\n\8\k\h\z\y\e\p\6\r\u\z\1\n\v\c\s\h\7\c\l\g\q\i\w\8\w\g\m\r\j\v\7\y\e\e\r\r\b\9\y\u\p\f\s\r\4\a\4\0\e\o\w\d\9\r\y\g\h\f\o\k\c\a\7\c\2\2\x\e\y\l\y\h\c\n\o\c\j\l\x\l\n\c\4\r\e\s\7\h\i\n\2\q\1\t\5\e\2\z\x\3\j\q\i\o\g\h\o\o\8\o\h\3\s\j\8\5\4\j\d\g\p\g\w\j\6\x\2\1\i\m\1\p\8\v\l\5\e\x\b\h\b\6\7\w\3\d\9\o\f\w\8\7\t\v\e\l\u\t\4\r\n\3\u\j\j\u\6\j\9\j\w\s\h\5\t\s\1\z\8\7\e\v\y\p\3\j\d\2\x\0\4\o\6\c\o\x\y\d\r\i\c\8\6\p\s\q\0\y\6\0\x\b\q\d\m\r\q\b\c\p\a\k\w\2\3\w\s\i\v\g\4\f\y\q\l\c\b\j\q\c\8\k\1\n\3\9\q\w\x\h\4\7\a\i\s\z\6\b\3\d\9\t\7\7\9\3\j\5\g\p\v\6\7\3\k\n\y\r\q\p\n\z\w\k\w\0\3\f\x\u\a\e\4\r\t\x\g\j\s\9\l\o\5\4\a\h\8\o\0\t\e\3\z\3\m\j\0\4\d\s\z\b\u\v\6\l\s\b\1\m\u\o\2\o\9\d\o\b\8\m\f\z\p\6\v\0\0\l\8\r\2\4\m\4\4\a\q\m\x\2\g\o\u\z\h\8\a\i\i\p\h\3\y\3\4\v\i\6\0\o\y\6\x\g\c\s\s\9\9\m\c\1\8\f\y\d\z\w\3\1\1\q\p\i\s\s\k\u\r\k\s\4\t\y\p\h\g\5\u\r\l\h\7\w\h\3\p\9\p\p\t\y\o\w\b\a\o\l\1\3\w\m\3\6\z\7\l\3\x\l\w\9\t\t\1\2\f\m\r\r\q\e\8\e\w\6\2\s\h\o ]] 00:07:06.010 04:22:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.010 04:22:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.010 [2024-12-07 04:22:09.124772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.010 [2024-12-07 04:22:09.124883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58266 ] 00:07:06.270 [2024-12-07 04:22:09.260626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.270 [2024-12-07 04:22:09.307320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.270  [2024-12-07T04:22:09.770Z] Copying: 512/512 [B] (average 250 kBps) 00:07:06.530 00:07:06.530 04:22:09 -- dd/posix.sh@93 -- # [[ 9pr60fqrtpf80ag72vlv25yxwh01hgzq4vrya2jcn8khzyep6ruz1nvcsh7clgqiw8wgmrjv7yeerrb9yupfsr4a40eowd9ryghfokca7c22xeylyhcnocjlxlnc4res7hin2q1t5e2zx3jqioghoo8oh3sj854jdgpgwj6x21im1p8vl5exbhb67w3d9ofw87tvelut4rn3ujju6j9jwsh5ts1z87evyp3jd2x04o6coxydric86psq0y60xbqdmrqbcpakw23wsivg4fyqlcbjqc8k1n39qwxh47aisz6b3d9t7793j5gpv673knyrqpnzwkw03fxuae4rtxgjs9lo54ah8o0te3z3mj04dszbuv6lsb1muo2o9dob8mfzp6v00l8r24m44aqmx2gouzh8aiiph3y34vi60oy6xgcss99mc18fydzw311qpisskurks4typhg5urlh7wh3p9pptyowbaol13wm36z7l3xlw9tt12fmrrqe8ew62sho == \9\p\r\6\0\f\q\r\t\p\f\8\0\a\g\7\2\v\l\v\2\5\y\x\w\h\0\1\h\g\z\q\4\v\r\y\a\2\j\c\n\8\k\h\z\y\e\p\6\r\u\z\1\n\v\c\s\h\7\c\l\g\q\i\w\8\w\g\m\r\j\v\7\y\e\e\r\r\b\9\y\u\p\f\s\r\4\a\4\0\e\o\w\d\9\r\y\g\h\f\o\k\c\a\7\c\2\2\x\e\y\l\y\h\c\n\o\c\j\l\x\l\n\c\4\r\e\s\7\h\i\n\2\q\1\t\5\e\2\z\x\3\j\q\i\o\g\h\o\o\8\o\h\3\s\j\8\5\4\j\d\g\p\g\w\j\6\x\2\1\i\m\1\p\8\v\l\5\e\x\b\h\b\6\7\w\3\d\9\o\f\w\8\7\t\v\e\l\u\t\4\r\n\3\u\j\j\u\6\j\9\j\w\s\h\5\t\s\1\z\8\7\e\v\y\p\3\j\d\2\x\0\4\o\6\c\o\x\y\d\r\i\c\8\6\p\s\q\0\y\6\0\x\b\q\d\m\r\q\b\c\p\a\k\w\2\3\w\s\i\v\g\4\f\y\q\l\c\b\j\q\c\8\k\1\n\3\9\q\w\x\h\4\7\a\i\s\z\6\b\3\d\9\t\7\7\9\3\j\5\g\p\v\6\7\3\k\n\y\r\q\p\n\z\w\k\w\0\3\f\x\u\a\e\4\r\t\x\g\j\s\9\l\o\5\4\a\h\8\o\0\t\e\3\z\3\m\j\0\4\d\s\z\b\u\v\6\l\s\b\1\m\u\o\2\o\9\d\o\b\8\m\f\z\p\6\v\0\0\l\8\r\2\4\m\4\4\a\q\m\x\2\g\o\u\z\h\8\a\i\i\p\h\3\y\3\4\v\i\6\0\o\y\6\x\g\c\s\s\9\9\m\c\1\8\f\y\d\z\w\3\1\1\q\p\i\s\s\k\u\r\k\s\4\t\y\p\h\g\5\u\r\l\h\7\w\h\3\p\9\p\p\t\y\o\w\b\a\o\l\1\3\w\m\3\6\z\7\l\3\x\l\w\9\t\t\1\2\f\m\r\r\q\e\8\e\w\6\2\s\h\o ]] 00:07:06.530 04:22:09 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:06.530 04:22:09 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:06.530 04:22:09 -- dd/common.sh@98 -- # xtrace_disable 00:07:06.530 04:22:09 -- common/autotest_common.sh@10 -- # set +x 00:07:06.530 04:22:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.530 04:22:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:06.530 [2024-12-07 04:22:09.587365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.530 [2024-12-07 04:22:09.587473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:07:06.530 [2024-12-07 04:22:09.723632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.790 [2024-12-07 04:22:09.773820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.790  [2024-12-07T04:22:10.030Z] Copying: 512/512 [B] (average 500 kBps) 00:07:06.790 00:07:06.790 04:22:09 -- dd/posix.sh@93 -- # [[ klfgtk29521ucndnlewizoaubppms39h98zy552k39k6wrzisldkrz6oift3rzl2xayomqosbowtgf1ka3i4hx00gw4b8o8tbno3fpsik36ivhc1zu8pbxerbid85dcmi9connofe2xdidngvb00khdv9bkbk33lvb59ywm7qniz3vkuineax03bj9xhsytdpqw2qp4or9jt9ns8go887bf05nnloa35ett70bm6uyxxwzyib4wonyf5fq8njr4mycwjwfexnoe5a83gvvtenvnqp5n5g8ma1kiaw94fabdtmtqkdjkghw2duzz2trrp1yf7wkr49s2ojyvhe23vdqo5c8stlk1gvf2uzm27v2ff11vt0v6hjvshbv7bie7ulpjl0vd1jzyv81hzw1w0yf481o7mkuiym2mai46cjjmwd4a7ts1uua09ew9eocqh4740bi5nc0itprj24tmelhdorgs9egi5zak4lx2q0hwt4qyaar1fi413vr1q8yui == \k\l\f\g\t\k\2\9\5\2\1\u\c\n\d\n\l\e\w\i\z\o\a\u\b\p\p\m\s\3\9\h\9\8\z\y\5\5\2\k\3\9\k\6\w\r\z\i\s\l\d\k\r\z\6\o\i\f\t\3\r\z\l\2\x\a\y\o\m\q\o\s\b\o\w\t\g\f\1\k\a\3\i\4\h\x\0\0\g\w\4\b\8\o\8\t\b\n\o\3\f\p\s\i\k\3\6\i\v\h\c\1\z\u\8\p\b\x\e\r\b\i\d\8\5\d\c\m\i\9\c\o\n\n\o\f\e\2\x\d\i\d\n\g\v\b\0\0\k\h\d\v\9\b\k\b\k\3\3\l\v\b\5\9\y\w\m\7\q\n\i\z\3\v\k\u\i\n\e\a\x\0\3\b\j\9\x\h\s\y\t\d\p\q\w\2\q\p\4\o\r\9\j\t\9\n\s\8\g\o\8\8\7\b\f\0\5\n\n\l\o\a\3\5\e\t\t\7\0\b\m\6\u\y\x\x\w\z\y\i\b\4\w\o\n\y\f\5\f\q\8\n\j\r\4\m\y\c\w\j\w\f\e\x\n\o\e\5\a\8\3\g\v\v\t\e\n\v\n\q\p\5\n\5\g\8\m\a\1\k\i\a\w\9\4\f\a\b\d\t\m\t\q\k\d\j\k\g\h\w\2\d\u\z\z\2\t\r\r\p\1\y\f\7\w\k\r\4\9\s\2\o\j\y\v\h\e\2\3\v\d\q\o\5\c\8\s\t\l\k\1\g\v\f\2\u\z\m\2\7\v\2\f\f\1\1\v\t\0\v\6\h\j\v\s\h\b\v\7\b\i\e\7\u\l\p\j\l\0\v\d\1\j\z\y\v\8\1\h\z\w\1\w\0\y\f\4\8\1\o\7\m\k\u\i\y\m\2\m\a\i\4\6\c\j\j\m\w\d\4\a\7\t\s\1\u\u\a\0\9\e\w\9\e\o\c\q\h\4\7\4\0\b\i\5\n\c\0\i\t\p\r\j\2\4\t\m\e\l\h\d\o\r\g\s\9\e\g\i\5\z\a\k\4\l\x\2\q\0\h\w\t\4\q\y\a\a\r\1\f\i\4\1\3\v\r\1\q\8\y\u\i ]] 00:07:06.790 04:22:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.790 04:22:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:06.790 [2024-12-07 04:22:10.028312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.790 [2024-12-07 04:22:10.028418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58281 ] 00:07:07.049 [2024-12-07 04:22:10.162049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.049 [2024-12-07 04:22:10.209920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.049  [2024-12-07T04:22:10.549Z] Copying: 512/512 [B] (average 500 kBps) 00:07:07.309 00:07:07.309 04:22:10 -- dd/posix.sh@93 -- # [[ klfgtk29521ucndnlewizoaubppms39h98zy552k39k6wrzisldkrz6oift3rzl2xayomqosbowtgf1ka3i4hx00gw4b8o8tbno3fpsik36ivhc1zu8pbxerbid85dcmi9connofe2xdidngvb00khdv9bkbk33lvb59ywm7qniz3vkuineax03bj9xhsytdpqw2qp4or9jt9ns8go887bf05nnloa35ett70bm6uyxxwzyib4wonyf5fq8njr4mycwjwfexnoe5a83gvvtenvnqp5n5g8ma1kiaw94fabdtmtqkdjkghw2duzz2trrp1yf7wkr49s2ojyvhe23vdqo5c8stlk1gvf2uzm27v2ff11vt0v6hjvshbv7bie7ulpjl0vd1jzyv81hzw1w0yf481o7mkuiym2mai46cjjmwd4a7ts1uua09ew9eocqh4740bi5nc0itprj24tmelhdorgs9egi5zak4lx2q0hwt4qyaar1fi413vr1q8yui == \k\l\f\g\t\k\2\9\5\2\1\u\c\n\d\n\l\e\w\i\z\o\a\u\b\p\p\m\s\3\9\h\9\8\z\y\5\5\2\k\3\9\k\6\w\r\z\i\s\l\d\k\r\z\6\o\i\f\t\3\r\z\l\2\x\a\y\o\m\q\o\s\b\o\w\t\g\f\1\k\a\3\i\4\h\x\0\0\g\w\4\b\8\o\8\t\b\n\o\3\f\p\s\i\k\3\6\i\v\h\c\1\z\u\8\p\b\x\e\r\b\i\d\8\5\d\c\m\i\9\c\o\n\n\o\f\e\2\x\d\i\d\n\g\v\b\0\0\k\h\d\v\9\b\k\b\k\3\3\l\v\b\5\9\y\w\m\7\q\n\i\z\3\v\k\u\i\n\e\a\x\0\3\b\j\9\x\h\s\y\t\d\p\q\w\2\q\p\4\o\r\9\j\t\9\n\s\8\g\o\8\8\7\b\f\0\5\n\n\l\o\a\3\5\e\t\t\7\0\b\m\6\u\y\x\x\w\z\y\i\b\4\w\o\n\y\f\5\f\q\8\n\j\r\4\m\y\c\w\j\w\f\e\x\n\o\e\5\a\8\3\g\v\v\t\e\n\v\n\q\p\5\n\5\g\8\m\a\1\k\i\a\w\9\4\f\a\b\d\t\m\t\q\k\d\j\k\g\h\w\2\d\u\z\z\2\t\r\r\p\1\y\f\7\w\k\r\4\9\s\2\o\j\y\v\h\e\2\3\v\d\q\o\5\c\8\s\t\l\k\1\g\v\f\2\u\z\m\2\7\v\2\f\f\1\1\v\t\0\v\6\h\j\v\s\h\b\v\7\b\i\e\7\u\l\p\j\l\0\v\d\1\j\z\y\v\8\1\h\z\w\1\w\0\y\f\4\8\1\o\7\m\k\u\i\y\m\2\m\a\i\4\6\c\j\j\m\w\d\4\a\7\t\s\1\u\u\a\0\9\e\w\9\e\o\c\q\h\4\7\4\0\b\i\5\n\c\0\i\t\p\r\j\2\4\t\m\e\l\h\d\o\r\g\s\9\e\g\i\5\z\a\k\4\l\x\2\q\0\h\w\t\4\q\y\a\a\r\1\f\i\4\1\3\v\r\1\q\8\y\u\i ]] 00:07:07.309 04:22:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.309 04:22:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:07.309 [2024-12-07 04:22:10.463730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.309 [2024-12-07 04:22:10.463825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:07:07.568 [2024-12-07 04:22:10.599001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.568 [2024-12-07 04:22:10.647538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.568  [2024-12-07T04:22:11.068Z] Copying: 512/512 [B] (average 500 kBps) 00:07:07.828 00:07:07.828 04:22:10 -- dd/posix.sh@93 -- # [[ klfgtk29521ucndnlewizoaubppms39h98zy552k39k6wrzisldkrz6oift3rzl2xayomqosbowtgf1ka3i4hx00gw4b8o8tbno3fpsik36ivhc1zu8pbxerbid85dcmi9connofe2xdidngvb00khdv9bkbk33lvb59ywm7qniz3vkuineax03bj9xhsytdpqw2qp4or9jt9ns8go887bf05nnloa35ett70bm6uyxxwzyib4wonyf5fq8njr4mycwjwfexnoe5a83gvvtenvnqp5n5g8ma1kiaw94fabdtmtqkdjkghw2duzz2trrp1yf7wkr49s2ojyvhe23vdqo5c8stlk1gvf2uzm27v2ff11vt0v6hjvshbv7bie7ulpjl0vd1jzyv81hzw1w0yf481o7mkuiym2mai46cjjmwd4a7ts1uua09ew9eocqh4740bi5nc0itprj24tmelhdorgs9egi5zak4lx2q0hwt4qyaar1fi413vr1q8yui == \k\l\f\g\t\k\2\9\5\2\1\u\c\n\d\n\l\e\w\i\z\o\a\u\b\p\p\m\s\3\9\h\9\8\z\y\5\5\2\k\3\9\k\6\w\r\z\i\s\l\d\k\r\z\6\o\i\f\t\3\r\z\l\2\x\a\y\o\m\q\o\s\b\o\w\t\g\f\1\k\a\3\i\4\h\x\0\0\g\w\4\b\8\o\8\t\b\n\o\3\f\p\s\i\k\3\6\i\v\h\c\1\z\u\8\p\b\x\e\r\b\i\d\8\5\d\c\m\i\9\c\o\n\n\o\f\e\2\x\d\i\d\n\g\v\b\0\0\k\h\d\v\9\b\k\b\k\3\3\l\v\b\5\9\y\w\m\7\q\n\i\z\3\v\k\u\i\n\e\a\x\0\3\b\j\9\x\h\s\y\t\d\p\q\w\2\q\p\4\o\r\9\j\t\9\n\s\8\g\o\8\8\7\b\f\0\5\n\n\l\o\a\3\5\e\t\t\7\0\b\m\6\u\y\x\x\w\z\y\i\b\4\w\o\n\y\f\5\f\q\8\n\j\r\4\m\y\c\w\j\w\f\e\x\n\o\e\5\a\8\3\g\v\v\t\e\n\v\n\q\p\5\n\5\g\8\m\a\1\k\i\a\w\9\4\f\a\b\d\t\m\t\q\k\d\j\k\g\h\w\2\d\u\z\z\2\t\r\r\p\1\y\f\7\w\k\r\4\9\s\2\o\j\y\v\h\e\2\3\v\d\q\o\5\c\8\s\t\l\k\1\g\v\f\2\u\z\m\2\7\v\2\f\f\1\1\v\t\0\v\6\h\j\v\s\h\b\v\7\b\i\e\7\u\l\p\j\l\0\v\d\1\j\z\y\v\8\1\h\z\w\1\w\0\y\f\4\8\1\o\7\m\k\u\i\y\m\2\m\a\i\4\6\c\j\j\m\w\d\4\a\7\t\s\1\u\u\a\0\9\e\w\9\e\o\c\q\h\4\7\4\0\b\i\5\n\c\0\i\t\p\r\j\2\4\t\m\e\l\h\d\o\r\g\s\9\e\g\i\5\z\a\k\4\l\x\2\q\0\h\w\t\4\q\y\a\a\r\1\f\i\4\1\3\v\r\1\q\8\y\u\i ]] 00:07:07.828 04:22:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.828 04:22:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:07.828 [2024-12-07 04:22:10.903197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.828 [2024-12-07 04:22:10.903297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58295 ] 00:07:07.828 [2024-12-07 04:22:11.039928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.089 [2024-12-07 04:22:11.088236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.089  [2024-12-07T04:22:11.329Z] Copying: 512/512 [B] (average 500 kBps) 00:07:08.089 00:07:08.089 04:22:11 -- dd/posix.sh@93 -- # [[ klfgtk29521ucndnlewizoaubppms39h98zy552k39k6wrzisldkrz6oift3rzl2xayomqosbowtgf1ka3i4hx00gw4b8o8tbno3fpsik36ivhc1zu8pbxerbid85dcmi9connofe2xdidngvb00khdv9bkbk33lvb59ywm7qniz3vkuineax03bj9xhsytdpqw2qp4or9jt9ns8go887bf05nnloa35ett70bm6uyxxwzyib4wonyf5fq8njr4mycwjwfexnoe5a83gvvtenvnqp5n5g8ma1kiaw94fabdtmtqkdjkghw2duzz2trrp1yf7wkr49s2ojyvhe23vdqo5c8stlk1gvf2uzm27v2ff11vt0v6hjvshbv7bie7ulpjl0vd1jzyv81hzw1w0yf481o7mkuiym2mai46cjjmwd4a7ts1uua09ew9eocqh4740bi5nc0itprj24tmelhdorgs9egi5zak4lx2q0hwt4qyaar1fi413vr1q8yui == \k\l\f\g\t\k\2\9\5\2\1\u\c\n\d\n\l\e\w\i\z\o\a\u\b\p\p\m\s\3\9\h\9\8\z\y\5\5\2\k\3\9\k\6\w\r\z\i\s\l\d\k\r\z\6\o\i\f\t\3\r\z\l\2\x\a\y\o\m\q\o\s\b\o\w\t\g\f\1\k\a\3\i\4\h\x\0\0\g\w\4\b\8\o\8\t\b\n\o\3\f\p\s\i\k\3\6\i\v\h\c\1\z\u\8\p\b\x\e\r\b\i\d\8\5\d\c\m\i\9\c\o\n\n\o\f\e\2\x\d\i\d\n\g\v\b\0\0\k\h\d\v\9\b\k\b\k\3\3\l\v\b\5\9\y\w\m\7\q\n\i\z\3\v\k\u\i\n\e\a\x\0\3\b\j\9\x\h\s\y\t\d\p\q\w\2\q\p\4\o\r\9\j\t\9\n\s\8\g\o\8\8\7\b\f\0\5\n\n\l\o\a\3\5\e\t\t\7\0\b\m\6\u\y\x\x\w\z\y\i\b\4\w\o\n\y\f\5\f\q\8\n\j\r\4\m\y\c\w\j\w\f\e\x\n\o\e\5\a\8\3\g\v\v\t\e\n\v\n\q\p\5\n\5\g\8\m\a\1\k\i\a\w\9\4\f\a\b\d\t\m\t\q\k\d\j\k\g\h\w\2\d\u\z\z\2\t\r\r\p\1\y\f\7\w\k\r\4\9\s\2\o\j\y\v\h\e\2\3\v\d\q\o\5\c\8\s\t\l\k\1\g\v\f\2\u\z\m\2\7\v\2\f\f\1\1\v\t\0\v\6\h\j\v\s\h\b\v\7\b\i\e\7\u\l\p\j\l\0\v\d\1\j\z\y\v\8\1\h\z\w\1\w\0\y\f\4\8\1\o\7\m\k\u\i\y\m\2\m\a\i\4\6\c\j\j\m\w\d\4\a\7\t\s\1\u\u\a\0\9\e\w\9\e\o\c\q\h\4\7\4\0\b\i\5\n\c\0\i\t\p\r\j\2\4\t\m\e\l\h\d\o\r\g\s\9\e\g\i\5\z\a\k\4\l\x\2\q\0\h\w\t\4\q\y\a\a\r\1\f\i\4\1\3\v\r\1\q\8\y\u\i ]] 00:07:08.089 00:07:08.089 real 0m3.567s 00:07:08.089 user 0m1.883s 00:07:08.089 sys 0m0.721s 00:07:08.089 04:22:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.089 04:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.089 ************************************ 00:07:08.089 END TEST dd_flags_misc 00:07:08.089 ************************************ 00:07:08.349 04:22:11 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:08.349 04:22:11 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:08.349 * Second test run, disabling liburing, forcing AIO 00:07:08.349 04:22:11 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:08.349 04:22:11 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:08.349 04:22:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.349 04:22:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.350 04:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.350 ************************************ 00:07:08.350 START TEST dd_flag_append_forced_aio 00:07:08.350 ************************************ 00:07:08.350 04:22:11 -- common/autotest_common.sh@1114 -- # append 00:07:08.350 04:22:11 -- dd/posix.sh@16 -- # local dump0 00:07:08.350 04:22:11 -- dd/posix.sh@17 -- # local dump1 00:07:08.350 04:22:11 -- dd/posix.sh@19 -- # gen_bytes 32 
00:07:08.350 04:22:11 -- dd/common.sh@98 -- # xtrace_disable 00:07:08.350 04:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.350 04:22:11 -- dd/posix.sh@19 -- # dump0=r09i8zer5ps4if0x10h60idj6wlw7mxf 00:07:08.350 04:22:11 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:08.350 04:22:11 -- dd/common.sh@98 -- # xtrace_disable 00:07:08.350 04:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.350 04:22:11 -- dd/posix.sh@20 -- # dump1=b6lyngaxnev14hvt6wbljawzwqpoyxrg 00:07:08.350 04:22:11 -- dd/posix.sh@22 -- # printf %s r09i8zer5ps4if0x10h60idj6wlw7mxf 00:07:08.350 04:22:11 -- dd/posix.sh@23 -- # printf %s b6lyngaxnev14hvt6wbljawzwqpoyxrg 00:07:08.350 04:22:11 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:08.350 [2024-12-07 04:22:11.388258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.350 [2024-12-07 04:22:11.388356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58317 ] 00:07:08.350 [2024-12-07 04:22:11.515080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.350 [2024-12-07 04:22:11.563422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.612  [2024-12-07T04:22:11.852Z] Copying: 32/32 [B] (average 31 kBps) 00:07:08.612 00:07:08.612 04:22:11 -- dd/posix.sh@27 -- # [[ b6lyngaxnev14hvt6wbljawzwqpoyxrgr09i8zer5ps4if0x10h60idj6wlw7mxf == \b\6\l\y\n\g\a\x\n\e\v\1\4\h\v\t\6\w\b\l\j\a\w\z\w\q\p\o\y\x\r\g\r\0\9\i\8\z\e\r\5\p\s\4\i\f\0\x\1\0\h\6\0\i\d\j\6\w\l\w\7\m\x\f ]] 00:07:08.612 00:07:08.612 real 0m0.420s 00:07:08.612 user 0m0.222s 00:07:08.612 sys 0m0.080s 00:07:08.612 04:22:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.612 04:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.612 ************************************ 00:07:08.612 END TEST dd_flag_append_forced_aio 00:07:08.612 ************************************ 00:07:08.612 04:22:11 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:08.612 04:22:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.612 04:22:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.612 04:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.612 ************************************ 00:07:08.612 START TEST dd_flag_directory_forced_aio 00:07:08.612 ************************************ 00:07:08.612 04:22:11 -- common/autotest_common.sh@1114 -- # directory 00:07:08.612 04:22:11 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.612 04:22:11 -- common/autotest_common.sh@650 -- # local es=0 00:07:08.612 04:22:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.612 04:22:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.612 04:22:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.612 04:22:11 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.613 04:22:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.613 04:22:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.613 04:22:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.613 04:22:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.613 04:22:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.613 04:22:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.873 [2024-12-07 04:22:11.853086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.873 [2024-12-07 04:22:11.853186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58344 ] 00:07:08.873 [2024-12-07 04:22:11.983263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.873 [2024-12-07 04:22:12.030753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.873 [2024-12-07 04:22:12.072209] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:08.873 [2024-12-07 04:22:12.072262] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:08.873 [2024-12-07 04:22:12.072291] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.133 [2024-12-07 04:22:12.129909] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:09.133 04:22:12 -- common/autotest_common.sh@653 -- # es=236 00:07:09.133 04:22:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.133 04:22:12 -- common/autotest_common.sh@662 -- # es=108 00:07:09.133 04:22:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.133 04:22:12 -- common/autotest_common.sh@670 -- # es=1 00:07:09.133 04:22:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.133 04:22:12 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:09.133 04:22:12 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.133 04:22:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:09.133 04:22:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.133 04:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.133 04:22:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.133 04:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.133 04:22:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.133 04:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.133 04:22:12 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.133 04:22:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.133 04:22:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:09.133 [2024-12-07 04:22:12.278222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.133 [2024-12-07 04:22:12.278333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ] 00:07:09.393 [2024-12-07 04:22:12.415747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.393 [2024-12-07 04:22:12.464414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.393 [2024-12-07 04:22:12.510273] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:09.393 [2024-12-07 04:22:12.510338] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:09.393 [2024-12-07 04:22:12.510366] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.393 [2024-12-07 04:22:12.566695] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:09.652 04:22:12 -- common/autotest_common.sh@653 -- # es=236 00:07:09.652 04:22:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.652 04:22:12 -- common/autotest_common.sh@662 -- # es=108 00:07:09.652 04:22:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.652 04:22:12 -- common/autotest_common.sh@670 -- # es=1 00:07:09.652 04:22:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.652 00:07:09.652 real 0m0.844s 00:07:09.652 user 0m0.466s 00:07:09.652 sys 0m0.170s 00:07:09.652 ************************************ 00:07:09.652 END TEST dd_flag_directory_forced_aio 00:07:09.652 ************************************ 00:07:09.652 04:22:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.652 04:22:12 -- common/autotest_common.sh@10 -- # set +x 00:07:09.652 04:22:12 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:09.652 04:22:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.652 04:22:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.652 04:22:12 -- common/autotest_common.sh@10 -- # set +x 00:07:09.652 ************************************ 00:07:09.652 START TEST dd_flag_nofollow_forced_aio 00:07:09.652 ************************************ 00:07:09.652 04:22:12 -- common/autotest_common.sh@1114 -- # nofollow 00:07:09.652 04:22:12 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:09.652 04:22:12 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:09.652 04:22:12 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:09.652 04:22:12 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:09.652 04:22:12 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.652 04:22:12 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.652 04:22:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.652 04:22:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.652 04:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.652 04:22:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.652 04:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.652 04:22:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.652 04:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.652 04:22:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.652 04:22:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.653 04:22:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.653 [2024-12-07 04:22:12.765777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.653 [2024-12-07 04:22:12.765877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58382 ] 00:07:09.912 [2024-12-07 04:22:12.902916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.912 [2024-12-07 04:22:12.953909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.912 [2024-12-07 04:22:13.000246] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:09.912 [2024-12-07 04:22:13.000311] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:09.912 [2024-12-07 04:22:13.000340] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.912 [2024-12-07 04:22:13.056199] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:09.912 04:22:13 -- common/autotest_common.sh@653 -- # es=216 00:07:09.912 04:22:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.912 04:22:13 -- common/autotest_common.sh@662 -- # es=88 00:07:09.912 04:22:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.912 04:22:13 -- common/autotest_common.sh@670 -- # es=1 00:07:09.912 04:22:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.912 04:22:13 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:09.912 04:22:13 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.912 04:22:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:09.912 04:22:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.912 04:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.912 04:22:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.912 04:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.912 04:22:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.912 04:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.912 04:22:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.912 04:22:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.912 04:22:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:10.172 [2024-12-07 04:22:13.198144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.172 [2024-12-07 04:22:13.198239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58391 ] 00:07:10.172 [2024-12-07 04:22:13.335467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.172 [2024-12-07 04:22:13.381935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.431 [2024-12-07 04:22:13.427078] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:10.431 [2024-12-07 04:22:13.427144] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:10.431 [2024-12-07 04:22:13.427174] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.431 [2024-12-07 04:22:13.484139] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:10.431 04:22:13 -- common/autotest_common.sh@653 -- # es=216 00:07:10.431 04:22:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.431 04:22:13 -- common/autotest_common.sh@662 -- # es=88 00:07:10.431 04:22:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:10.431 04:22:13 -- common/autotest_common.sh@670 -- # es=1 00:07:10.431 04:22:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.431 04:22:13 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:10.431 04:22:13 -- dd/common.sh@98 -- # xtrace_disable 00:07:10.431 04:22:13 -- common/autotest_common.sh@10 -- # set +x 00:07:10.431 04:22:13 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.431 [2024-12-07 04:22:13.633779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:10.431 [2024-12-07 04:22:13.633872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ] 00:07:10.690 [2024-12-07 04:22:13.770111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.690 [2024-12-07 04:22:13.816639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.690  [2024-12-07T04:22:14.190Z] Copying: 512/512 [B] (average 500 kBps) 00:07:10.950 00:07:10.950 04:22:14 -- dd/posix.sh@49 -- # [[ dk43xexx895oze08minxn5e4mlus8ty3y5lymsim0jsex0o6z5nipfjwtipwgp1zc4rictrbqsb3q0g7yjacri6ulhk1kngbor4n4hnrk4nr0uplj4cwgdzh893ri3tgsa7vobyupu3zsubla6kkvk7nlk3vf4e3vxdugwq104582p9xve7n6zunjjzwk0ysfdlbqng5pbilx4smslawrmgbg3p9wtpdeh41be9wzn0mdw6vezcvukq5bpko9o9n8bfu0zm9b6m9herq84zdcu6i71c8rf53ge3vosyyx5a2tv6w9dteg8cj0aq6tergc09ghu8ui50ftxevfq9lr0twjxtrlcswoommaemlr88a2m4al09t1bsso2entut8zx638szg7r685tdw56o5ucr88nr1wrcw49fbp8cmei6jaf0y4u2dwovyathtpnbwnm2frdb24vsrscb80x7sj3u24f8cczcr9ejcyn9pf0ehmet7lp12094al3bbu38g == \d\k\4\3\x\e\x\x\8\9\5\o\z\e\0\8\m\i\n\x\n\5\e\4\m\l\u\s\8\t\y\3\y\5\l\y\m\s\i\m\0\j\s\e\x\0\o\6\z\5\n\i\p\f\j\w\t\i\p\w\g\p\1\z\c\4\r\i\c\t\r\b\q\s\b\3\q\0\g\7\y\j\a\c\r\i\6\u\l\h\k\1\k\n\g\b\o\r\4\n\4\h\n\r\k\4\n\r\0\u\p\l\j\4\c\w\g\d\z\h\8\9\3\r\i\3\t\g\s\a\7\v\o\b\y\u\p\u\3\z\s\u\b\l\a\6\k\k\v\k\7\n\l\k\3\v\f\4\e\3\v\x\d\u\g\w\q\1\0\4\5\8\2\p\9\x\v\e\7\n\6\z\u\n\j\j\z\w\k\0\y\s\f\d\l\b\q\n\g\5\p\b\i\l\x\4\s\m\s\l\a\w\r\m\g\b\g\3\p\9\w\t\p\d\e\h\4\1\b\e\9\w\z\n\0\m\d\w\6\v\e\z\c\v\u\k\q\5\b\p\k\o\9\o\9\n\8\b\f\u\0\z\m\9\b\6\m\9\h\e\r\q\8\4\z\d\c\u\6\i\7\1\c\8\r\f\5\3\g\e\3\v\o\s\y\y\x\5\a\2\t\v\6\w\9\d\t\e\g\8\c\j\0\a\q\6\t\e\r\g\c\0\9\g\h\u\8\u\i\5\0\f\t\x\e\v\f\q\9\l\r\0\t\w\j\x\t\r\l\c\s\w\o\o\m\m\a\e\m\l\r\8\8\a\2\m\4\a\l\0\9\t\1\b\s\s\o\2\e\n\t\u\t\8\z\x\6\3\8\s\z\g\7\r\6\8\5\t\d\w\5\6\o\5\u\c\r\8\8\n\r\1\w\r\c\w\4\9\f\b\p\8\c\m\e\i\6\j\a\f\0\y\4\u\2\d\w\o\v\y\a\t\h\t\p\n\b\w\n\m\2\f\r\d\b\2\4\v\s\r\s\c\b\8\0\x\7\s\j\3\u\2\4\f\8\c\c\z\c\r\9\e\j\c\y\n\9\p\f\0\e\h\m\e\t\7\l\p\1\2\0\9\4\a\l\3\b\b\u\3\8\g ]] 00:07:10.950 00:07:10.950 real 0m1.324s 00:07:10.950 user 0m0.717s 00:07:10.950 sys 0m0.281s 00:07:10.950 ************************************ 00:07:10.950 END TEST dd_flag_nofollow_forced_aio 00:07:10.950 ************************************ 00:07:10.950 04:22:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.950 04:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:10.950 04:22:14 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:10.950 04:22:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.950 04:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.950 04:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:10.950 ************************************ 00:07:10.950 START TEST dd_flag_noatime_forced_aio 00:07:10.950 ************************************ 00:07:10.950 04:22:14 -- common/autotest_common.sh@1114 -- # noatime 00:07:10.950 04:22:14 -- dd/posix.sh@53 -- # local atime_if 00:07:10.950 04:22:14 -- dd/posix.sh@54 -- # local atime_of 00:07:10.950 04:22:14 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:10.950 04:22:14 -- dd/common.sh@98 -- # xtrace_disable 00:07:10.950 04:22:14 -- common/autotest_common.sh@10 -- # set +x 00:07:10.950 04:22:14 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.950 04:22:14 -- dd/posix.sh@60 -- 
# atime_if=1733545333 00:07:10.950 04:22:14 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.951 04:22:14 -- dd/posix.sh@61 -- # atime_of=1733545334 00:07:10.951 04:22:14 -- dd/posix.sh@66 -- # sleep 1 00:07:11.889 04:22:15 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.147 [2024-12-07 04:22:15.146425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.147 [2024-12-07 04:22:15.146527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ] 00:07:12.147 [2024-12-07 04:22:15.274105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.147 [2024-12-07 04:22:15.320767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.147  [2024-12-07T04:22:15.645Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.405 00:07:12.405 04:22:15 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:12.405 04:22:15 -- dd/posix.sh@69 -- # (( atime_if == 1733545333 )) 00:07:12.405 04:22:15 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.405 04:22:15 -- dd/posix.sh@70 -- # (( atime_of == 1733545334 )) 00:07:12.405 04:22:15 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.405 [2024-12-07 04:22:15.585257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:12.405 [2024-12-07 04:22:15.585350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58451 ] 00:07:12.664 [2024-12-07 04:22:15.720754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.664 [2024-12-07 04:22:15.766899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.664  [2024-12-07T04:22:16.163Z] Copying: 512/512 [B] (average 500 kBps) 00:07:12.923 00:07:12.923 04:22:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:12.923 04:22:15 -- dd/posix.sh@73 -- # (( atime_if < 1733545335 )) 00:07:12.923 00:07:12.923 real 0m1.894s 00:07:12.923 user 0m0.468s 00:07:12.923 sys 0m0.189s 00:07:12.923 ************************************ 00:07:12.923 END TEST dd_flag_noatime_forced_aio 00:07:12.923 ************************************ 00:07:12.923 04:22:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.923 04:22:15 -- common/autotest_common.sh@10 -- # set +x 00:07:12.923 04:22:16 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:12.923 04:22:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.923 04:22:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.923 04:22:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.923 ************************************ 00:07:12.923 START TEST dd_flags_misc_forced_aio 00:07:12.923 ************************************ 00:07:12.923 04:22:16 -- common/autotest_common.sh@1114 -- # io 00:07:12.923 04:22:16 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:12.923 04:22:16 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:12.923 04:22:16 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:12.923 04:22:16 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.923 04:22:16 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.923 04:22:16 -- dd/common.sh@98 -- # xtrace_disable 00:07:12.923 04:22:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.923 04:22:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.923 04:22:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.923 [2024-12-07 04:22:16.083379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:12.923 [2024-12-07 04:22:16.083657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58472 ] 00:07:13.182 [2024-12-07 04:22:16.219723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.182 [2024-12-07 04:22:16.265819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.182  [2024-12-07T04:22:16.682Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.442 00:07:13.442 04:22:16 -- dd/posix.sh@93 -- # [[ a85ef58ku6g92g3kotyv2vahzo865u10vb6lk8xuauwcholf1fbhiimntmg2h8bwaynfbh7l2tv2iy95gc3dfdqpaqgcb0ijsupweblqfkfst1deh80s1fn4bpgv5abeusxpkrkh2q02442xxncapnyybyfhwzj5e5d1n9kay8fjgwkoiiax6pesiobc9t40z4pnlmbkc1wnf2zp6ckdzmc6shlzfq0nzm3kzpv4g3vwqizm1ywkb7i9mgaatquraol9p0jc0w4pnv2bgkmew29orlfkxz8epkjlyrhvduopmyd2xo7fdzae8pccyxooveozr8gibbkbup9du73xxtsslr6apo0cqmrscabaihchmhbkabb85q7ueq6q7a0xy7dqw56hgzb8tw0brffxngjtd9825umj25fkih9d2qkvgbiphowky8l70vziipzhzou74o5mkhjyhmxombeix7vxxxpam03erjyn8zae1up89afkgx61you8yzdld8kt == \a\8\5\e\f\5\8\k\u\6\g\9\2\g\3\k\o\t\y\v\2\v\a\h\z\o\8\6\5\u\1\0\v\b\6\l\k\8\x\u\a\u\w\c\h\o\l\f\1\f\b\h\i\i\m\n\t\m\g\2\h\8\b\w\a\y\n\f\b\h\7\l\2\t\v\2\i\y\9\5\g\c\3\d\f\d\q\p\a\q\g\c\b\0\i\j\s\u\p\w\e\b\l\q\f\k\f\s\t\1\d\e\h\8\0\s\1\f\n\4\b\p\g\v\5\a\b\e\u\s\x\p\k\r\k\h\2\q\0\2\4\4\2\x\x\n\c\a\p\n\y\y\b\y\f\h\w\z\j\5\e\5\d\1\n\9\k\a\y\8\f\j\g\w\k\o\i\i\a\x\6\p\e\s\i\o\b\c\9\t\4\0\z\4\p\n\l\m\b\k\c\1\w\n\f\2\z\p\6\c\k\d\z\m\c\6\s\h\l\z\f\q\0\n\z\m\3\k\z\p\v\4\g\3\v\w\q\i\z\m\1\y\w\k\b\7\i\9\m\g\a\a\t\q\u\r\a\o\l\9\p\0\j\c\0\w\4\p\n\v\2\b\g\k\m\e\w\2\9\o\r\l\f\k\x\z\8\e\p\k\j\l\y\r\h\v\d\u\o\p\m\y\d\2\x\o\7\f\d\z\a\e\8\p\c\c\y\x\o\o\v\e\o\z\r\8\g\i\b\b\k\b\u\p\9\d\u\7\3\x\x\t\s\s\l\r\6\a\p\o\0\c\q\m\r\s\c\a\b\a\i\h\c\h\m\h\b\k\a\b\b\8\5\q\7\u\e\q\6\q\7\a\0\x\y\7\d\q\w\5\6\h\g\z\b\8\t\w\0\b\r\f\f\x\n\g\j\t\d\9\8\2\5\u\m\j\2\5\f\k\i\h\9\d\2\q\k\v\g\b\i\p\h\o\w\k\y\8\l\7\0\v\z\i\i\p\z\h\z\o\u\7\4\o\5\m\k\h\j\y\h\m\x\o\m\b\e\i\x\7\v\x\x\x\p\a\m\0\3\e\r\j\y\n\8\z\a\e\1\u\p\8\9\a\f\k\g\x\6\1\y\o\u\8\y\z\d\l\d\8\k\t ]] 00:07:13.442 04:22:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.442 04:22:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:13.442 [2024-12-07 04:22:16.529703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.442 [2024-12-07 04:22:16.530197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58479 ] 00:07:13.442 [2024-12-07 04:22:16.666884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.701 [2024-12-07 04:22:16.716844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.701  [2024-12-07T04:22:16.941Z] Copying: 512/512 [B] (average 500 kBps) 00:07:13.701 00:07:13.701 04:22:16 -- dd/posix.sh@93 -- # [[ a85ef58ku6g92g3kotyv2vahzo865u10vb6lk8xuauwcholf1fbhiimntmg2h8bwaynfbh7l2tv2iy95gc3dfdqpaqgcb0ijsupweblqfkfst1deh80s1fn4bpgv5abeusxpkrkh2q02442xxncapnyybyfhwzj5e5d1n9kay8fjgwkoiiax6pesiobc9t40z4pnlmbkc1wnf2zp6ckdzmc6shlzfq0nzm3kzpv4g3vwqizm1ywkb7i9mgaatquraol9p0jc0w4pnv2bgkmew29orlfkxz8epkjlyrhvduopmyd2xo7fdzae8pccyxooveozr8gibbkbup9du73xxtsslr6apo0cqmrscabaihchmhbkabb85q7ueq6q7a0xy7dqw56hgzb8tw0brffxngjtd9825umj25fkih9d2qkvgbiphowky8l70vziipzhzou74o5mkhjyhmxombeix7vxxxpam03erjyn8zae1up89afkgx61you8yzdld8kt == \a\8\5\e\f\5\8\k\u\6\g\9\2\g\3\k\o\t\y\v\2\v\a\h\z\o\8\6\5\u\1\0\v\b\6\l\k\8\x\u\a\u\w\c\h\o\l\f\1\f\b\h\i\i\m\n\t\m\g\2\h\8\b\w\a\y\n\f\b\h\7\l\2\t\v\2\i\y\9\5\g\c\3\d\f\d\q\p\a\q\g\c\b\0\i\j\s\u\p\w\e\b\l\q\f\k\f\s\t\1\d\e\h\8\0\s\1\f\n\4\b\p\g\v\5\a\b\e\u\s\x\p\k\r\k\h\2\q\0\2\4\4\2\x\x\n\c\a\p\n\y\y\b\y\f\h\w\z\j\5\e\5\d\1\n\9\k\a\y\8\f\j\g\w\k\o\i\i\a\x\6\p\e\s\i\o\b\c\9\t\4\0\z\4\p\n\l\m\b\k\c\1\w\n\f\2\z\p\6\c\k\d\z\m\c\6\s\h\l\z\f\q\0\n\z\m\3\k\z\p\v\4\g\3\v\w\q\i\z\m\1\y\w\k\b\7\i\9\m\g\a\a\t\q\u\r\a\o\l\9\p\0\j\c\0\w\4\p\n\v\2\b\g\k\m\e\w\2\9\o\r\l\f\k\x\z\8\e\p\k\j\l\y\r\h\v\d\u\o\p\m\y\d\2\x\o\7\f\d\z\a\e\8\p\c\c\y\x\o\o\v\e\o\z\r\8\g\i\b\b\k\b\u\p\9\d\u\7\3\x\x\t\s\s\l\r\6\a\p\o\0\c\q\m\r\s\c\a\b\a\i\h\c\h\m\h\b\k\a\b\b\8\5\q\7\u\e\q\6\q\7\a\0\x\y\7\d\q\w\5\6\h\g\z\b\8\t\w\0\b\r\f\f\x\n\g\j\t\d\9\8\2\5\u\m\j\2\5\f\k\i\h\9\d\2\q\k\v\g\b\i\p\h\o\w\k\y\8\l\7\0\v\z\i\i\p\z\h\z\o\u\7\4\o\5\m\k\h\j\y\h\m\x\o\m\b\e\i\x\7\v\x\x\x\p\a\m\0\3\e\r\j\y\n\8\z\a\e\1\u\p\8\9\a\f\k\g\x\6\1\y\o\u\8\y\z\d\l\d\8\k\t ]] 00:07:13.701 04:22:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.701 04:22:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.961 [2024-12-07 04:22:16.970924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:13.961 [2024-12-07 04:22:16.971394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:07:13.961 [2024-12-07 04:22:17.103568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.961 [2024-12-07 04:22:17.150443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.220  [2024-12-07T04:22:17.460Z] Copying: 512/512 [B] (average 166 kBps) 00:07:14.220 00:07:14.221 04:22:17 -- dd/posix.sh@93 -- # [[ a85ef58ku6g92g3kotyv2vahzo865u10vb6lk8xuauwcholf1fbhiimntmg2h8bwaynfbh7l2tv2iy95gc3dfdqpaqgcb0ijsupweblqfkfst1deh80s1fn4bpgv5abeusxpkrkh2q02442xxncapnyybyfhwzj5e5d1n9kay8fjgwkoiiax6pesiobc9t40z4pnlmbkc1wnf2zp6ckdzmc6shlzfq0nzm3kzpv4g3vwqizm1ywkb7i9mgaatquraol9p0jc0w4pnv2bgkmew29orlfkxz8epkjlyrhvduopmyd2xo7fdzae8pccyxooveozr8gibbkbup9du73xxtsslr6apo0cqmrscabaihchmhbkabb85q7ueq6q7a0xy7dqw56hgzb8tw0brffxngjtd9825umj25fkih9d2qkvgbiphowky8l70vziipzhzou74o5mkhjyhmxombeix7vxxxpam03erjyn8zae1up89afkgx61you8yzdld8kt == \a\8\5\e\f\5\8\k\u\6\g\9\2\g\3\k\o\t\y\v\2\v\a\h\z\o\8\6\5\u\1\0\v\b\6\l\k\8\x\u\a\u\w\c\h\o\l\f\1\f\b\h\i\i\m\n\t\m\g\2\h\8\b\w\a\y\n\f\b\h\7\l\2\t\v\2\i\y\9\5\g\c\3\d\f\d\q\p\a\q\g\c\b\0\i\j\s\u\p\w\e\b\l\q\f\k\f\s\t\1\d\e\h\8\0\s\1\f\n\4\b\p\g\v\5\a\b\e\u\s\x\p\k\r\k\h\2\q\0\2\4\4\2\x\x\n\c\a\p\n\y\y\b\y\f\h\w\z\j\5\e\5\d\1\n\9\k\a\y\8\f\j\g\w\k\o\i\i\a\x\6\p\e\s\i\o\b\c\9\t\4\0\z\4\p\n\l\m\b\k\c\1\w\n\f\2\z\p\6\c\k\d\z\m\c\6\s\h\l\z\f\q\0\n\z\m\3\k\z\p\v\4\g\3\v\w\q\i\z\m\1\y\w\k\b\7\i\9\m\g\a\a\t\q\u\r\a\o\l\9\p\0\j\c\0\w\4\p\n\v\2\b\g\k\m\e\w\2\9\o\r\l\f\k\x\z\8\e\p\k\j\l\y\r\h\v\d\u\o\p\m\y\d\2\x\o\7\f\d\z\a\e\8\p\c\c\y\x\o\o\v\e\o\z\r\8\g\i\b\b\k\b\u\p\9\d\u\7\3\x\x\t\s\s\l\r\6\a\p\o\0\c\q\m\r\s\c\a\b\a\i\h\c\h\m\h\b\k\a\b\b\8\5\q\7\u\e\q\6\q\7\a\0\x\y\7\d\q\w\5\6\h\g\z\b\8\t\w\0\b\r\f\f\x\n\g\j\t\d\9\8\2\5\u\m\j\2\5\f\k\i\h\9\d\2\q\k\v\g\b\i\p\h\o\w\k\y\8\l\7\0\v\z\i\i\p\z\h\z\o\u\7\4\o\5\m\k\h\j\y\h\m\x\o\m\b\e\i\x\7\v\x\x\x\p\a\m\0\3\e\r\j\y\n\8\z\a\e\1\u\p\8\9\a\f\k\g\x\6\1\y\o\u\8\y\z\d\l\d\8\k\t ]] 00:07:14.221 04:22:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.221 04:22:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.221 [2024-12-07 04:22:17.415485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:14.221 [2024-12-07 04:22:17.415578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58494 ] 00:07:14.480 [2024-12-07 04:22:17.551584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.480 [2024-12-07 04:22:17.606202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.480  [2024-12-07T04:22:17.979Z] Copying: 512/512 [B] (average 500 kBps) 00:07:14.739 00:07:14.739 04:22:17 -- dd/posix.sh@93 -- # [[ a85ef58ku6g92g3kotyv2vahzo865u10vb6lk8xuauwcholf1fbhiimntmg2h8bwaynfbh7l2tv2iy95gc3dfdqpaqgcb0ijsupweblqfkfst1deh80s1fn4bpgv5abeusxpkrkh2q02442xxncapnyybyfhwzj5e5d1n9kay8fjgwkoiiax6pesiobc9t40z4pnlmbkc1wnf2zp6ckdzmc6shlzfq0nzm3kzpv4g3vwqizm1ywkb7i9mgaatquraol9p0jc0w4pnv2bgkmew29orlfkxz8epkjlyrhvduopmyd2xo7fdzae8pccyxooveozr8gibbkbup9du73xxtsslr6apo0cqmrscabaihchmhbkabb85q7ueq6q7a0xy7dqw56hgzb8tw0brffxngjtd9825umj25fkih9d2qkvgbiphowky8l70vziipzhzou74o5mkhjyhmxombeix7vxxxpam03erjyn8zae1up89afkgx61you8yzdld8kt == \a\8\5\e\f\5\8\k\u\6\g\9\2\g\3\k\o\t\y\v\2\v\a\h\z\o\8\6\5\u\1\0\v\b\6\l\k\8\x\u\a\u\w\c\h\o\l\f\1\f\b\h\i\i\m\n\t\m\g\2\h\8\b\w\a\y\n\f\b\h\7\l\2\t\v\2\i\y\9\5\g\c\3\d\f\d\q\p\a\q\g\c\b\0\i\j\s\u\p\w\e\b\l\q\f\k\f\s\t\1\d\e\h\8\0\s\1\f\n\4\b\p\g\v\5\a\b\e\u\s\x\p\k\r\k\h\2\q\0\2\4\4\2\x\x\n\c\a\p\n\y\y\b\y\f\h\w\z\j\5\e\5\d\1\n\9\k\a\y\8\f\j\g\w\k\o\i\i\a\x\6\p\e\s\i\o\b\c\9\t\4\0\z\4\p\n\l\m\b\k\c\1\w\n\f\2\z\p\6\c\k\d\z\m\c\6\s\h\l\z\f\q\0\n\z\m\3\k\z\p\v\4\g\3\v\w\q\i\z\m\1\y\w\k\b\7\i\9\m\g\a\a\t\q\u\r\a\o\l\9\p\0\j\c\0\w\4\p\n\v\2\b\g\k\m\e\w\2\9\o\r\l\f\k\x\z\8\e\p\k\j\l\y\r\h\v\d\u\o\p\m\y\d\2\x\o\7\f\d\z\a\e\8\p\c\c\y\x\o\o\v\e\o\z\r\8\g\i\b\b\k\b\u\p\9\d\u\7\3\x\x\t\s\s\l\r\6\a\p\o\0\c\q\m\r\s\c\a\b\a\i\h\c\h\m\h\b\k\a\b\b\8\5\q\7\u\e\q\6\q\7\a\0\x\y\7\d\q\w\5\6\h\g\z\b\8\t\w\0\b\r\f\f\x\n\g\j\t\d\9\8\2\5\u\m\j\2\5\f\k\i\h\9\d\2\q\k\v\g\b\i\p\h\o\w\k\y\8\l\7\0\v\z\i\i\p\z\h\z\o\u\7\4\o\5\m\k\h\j\y\h\m\x\o\m\b\e\i\x\7\v\x\x\x\p\a\m\0\3\e\r\j\y\n\8\z\a\e\1\u\p\8\9\a\f\k\g\x\6\1\y\o\u\8\y\z\d\l\d\8\k\t ]] 00:07:14.739 04:22:17 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:14.739 04:22:17 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:14.739 04:22:17 -- dd/common.sh@98 -- # xtrace_disable 00:07:14.739 04:22:17 -- common/autotest_common.sh@10 -- # set +x 00:07:14.739 04:22:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.739 04:22:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:14.739 [2024-12-07 04:22:17.878362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:14.739 [2024-12-07 04:22:17.878629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58502 ] 00:07:15.006 [2024-12-07 04:22:18.013345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.006 [2024-12-07 04:22:18.062776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.006  [2024-12-07T04:22:18.508Z] Copying: 512/512 [B] (average 500 kBps) 00:07:15.268 00:07:15.268 04:22:18 -- dd/posix.sh@93 -- # [[ 1ohncelhw0avk94hcsvdw0xeo03kgwolelru3uy5mv1qm6802adwemmfw9w84gc25fn9l6e2xw1uq5rg9p2y6qif5hywkjnzlv7qmdpmssjergxx5clwyaqfhc1x7j5wneagxhcbrjv1i6mu005220evopg0ufpz7zu49mr9xs5jjxy3yolvzzgjog54cv7ih01a6uzy9uh1sctf9ao0t3i4y7cn4nxe8tn36mp1dkjyodc1awitogunii55bli9iyhwqvq8m6haa6vmqsitkljvkbdeiept9xjnobuookz013keqhvzf7xca5c5p65ef627tdspisk4yot7izxp4mmoj82bstu4u6brg5t6qf49hfk7sqx9ati5skkzorlmgandeim55p2zp3af206tn2hnolbgcd2pki28cw1hprgu2yodkjuena4do5um4mgoye3s864epi1by04q3kyt0t1vx37p50j113wjtyg0lwvimmxwbqjwienv3f8rrli9 == \1\o\h\n\c\e\l\h\w\0\a\v\k\9\4\h\c\s\v\d\w\0\x\e\o\0\3\k\g\w\o\l\e\l\r\u\3\u\y\5\m\v\1\q\m\6\8\0\2\a\d\w\e\m\m\f\w\9\w\8\4\g\c\2\5\f\n\9\l\6\e\2\x\w\1\u\q\5\r\g\9\p\2\y\6\q\i\f\5\h\y\w\k\j\n\z\l\v\7\q\m\d\p\m\s\s\j\e\r\g\x\x\5\c\l\w\y\a\q\f\h\c\1\x\7\j\5\w\n\e\a\g\x\h\c\b\r\j\v\1\i\6\m\u\0\0\5\2\2\0\e\v\o\p\g\0\u\f\p\z\7\z\u\4\9\m\r\9\x\s\5\j\j\x\y\3\y\o\l\v\z\z\g\j\o\g\5\4\c\v\7\i\h\0\1\a\6\u\z\y\9\u\h\1\s\c\t\f\9\a\o\0\t\3\i\4\y\7\c\n\4\n\x\e\8\t\n\3\6\m\p\1\d\k\j\y\o\d\c\1\a\w\i\t\o\g\u\n\i\i\5\5\b\l\i\9\i\y\h\w\q\v\q\8\m\6\h\a\a\6\v\m\q\s\i\t\k\l\j\v\k\b\d\e\i\e\p\t\9\x\j\n\o\b\u\o\o\k\z\0\1\3\k\e\q\h\v\z\f\7\x\c\a\5\c\5\p\6\5\e\f\6\2\7\t\d\s\p\i\s\k\4\y\o\t\7\i\z\x\p\4\m\m\o\j\8\2\b\s\t\u\4\u\6\b\r\g\5\t\6\q\f\4\9\h\f\k\7\s\q\x\9\a\t\i\5\s\k\k\z\o\r\l\m\g\a\n\d\e\i\m\5\5\p\2\z\p\3\a\f\2\0\6\t\n\2\h\n\o\l\b\g\c\d\2\p\k\i\2\8\c\w\1\h\p\r\g\u\2\y\o\d\k\j\u\e\n\a\4\d\o\5\u\m\4\m\g\o\y\e\3\s\8\6\4\e\p\i\1\b\y\0\4\q\3\k\y\t\0\t\1\v\x\3\7\p\5\0\j\1\1\3\w\j\t\y\g\0\l\w\v\i\m\m\x\w\b\q\j\w\i\e\n\v\3\f\8\r\r\l\i\9 ]] 00:07:15.268 04:22:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.268 04:22:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:15.268 [2024-12-07 04:22:18.324265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:15.268 [2024-12-07 04:22:18.324557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58509 ] 00:07:15.268 [2024-12-07 04:22:18.460732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.526 [2024-12-07 04:22:18.508065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.526  [2024-12-07T04:22:18.766Z] Copying: 512/512 [B] (average 500 kBps) 00:07:15.526 00:07:15.526 04:22:18 -- dd/posix.sh@93 -- # [[ 1ohncelhw0avk94hcsvdw0xeo03kgwolelru3uy5mv1qm6802adwemmfw9w84gc25fn9l6e2xw1uq5rg9p2y6qif5hywkjnzlv7qmdpmssjergxx5clwyaqfhc1x7j5wneagxhcbrjv1i6mu005220evopg0ufpz7zu49mr9xs5jjxy3yolvzzgjog54cv7ih01a6uzy9uh1sctf9ao0t3i4y7cn4nxe8tn36mp1dkjyodc1awitogunii55bli9iyhwqvq8m6haa6vmqsitkljvkbdeiept9xjnobuookz013keqhvzf7xca5c5p65ef627tdspisk4yot7izxp4mmoj82bstu4u6brg5t6qf49hfk7sqx9ati5skkzorlmgandeim55p2zp3af206tn2hnolbgcd2pki28cw1hprgu2yodkjuena4do5um4mgoye3s864epi1by04q3kyt0t1vx37p50j113wjtyg0lwvimmxwbqjwienv3f8rrli9 == \1\o\h\n\c\e\l\h\w\0\a\v\k\9\4\h\c\s\v\d\w\0\x\e\o\0\3\k\g\w\o\l\e\l\r\u\3\u\y\5\m\v\1\q\m\6\8\0\2\a\d\w\e\m\m\f\w\9\w\8\4\g\c\2\5\f\n\9\l\6\e\2\x\w\1\u\q\5\r\g\9\p\2\y\6\q\i\f\5\h\y\w\k\j\n\z\l\v\7\q\m\d\p\m\s\s\j\e\r\g\x\x\5\c\l\w\y\a\q\f\h\c\1\x\7\j\5\w\n\e\a\g\x\h\c\b\r\j\v\1\i\6\m\u\0\0\5\2\2\0\e\v\o\p\g\0\u\f\p\z\7\z\u\4\9\m\r\9\x\s\5\j\j\x\y\3\y\o\l\v\z\z\g\j\o\g\5\4\c\v\7\i\h\0\1\a\6\u\z\y\9\u\h\1\s\c\t\f\9\a\o\0\t\3\i\4\y\7\c\n\4\n\x\e\8\t\n\3\6\m\p\1\d\k\j\y\o\d\c\1\a\w\i\t\o\g\u\n\i\i\5\5\b\l\i\9\i\y\h\w\q\v\q\8\m\6\h\a\a\6\v\m\q\s\i\t\k\l\j\v\k\b\d\e\i\e\p\t\9\x\j\n\o\b\u\o\o\k\z\0\1\3\k\e\q\h\v\z\f\7\x\c\a\5\c\5\p\6\5\e\f\6\2\7\t\d\s\p\i\s\k\4\y\o\t\7\i\z\x\p\4\m\m\o\j\8\2\b\s\t\u\4\u\6\b\r\g\5\t\6\q\f\4\9\h\f\k\7\s\q\x\9\a\t\i\5\s\k\k\z\o\r\l\m\g\a\n\d\e\i\m\5\5\p\2\z\p\3\a\f\2\0\6\t\n\2\h\n\o\l\b\g\c\d\2\p\k\i\2\8\c\w\1\h\p\r\g\u\2\y\o\d\k\j\u\e\n\a\4\d\o\5\u\m\4\m\g\o\y\e\3\s\8\6\4\e\p\i\1\b\y\0\4\q\3\k\y\t\0\t\1\v\x\3\7\p\5\0\j\1\1\3\w\j\t\y\g\0\l\w\v\i\m\m\x\w\b\q\j\w\i\e\n\v\3\f\8\r\r\l\i\9 ]] 00:07:15.526 04:22:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:15.526 04:22:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:15.785 [2024-12-07 04:22:18.779957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:15.785 [2024-12-07 04:22:18.780226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58517 ] 00:07:15.785 [2024-12-07 04:22:18.915715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.785 [2024-12-07 04:22:18.963154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.785  [2024-12-07T04:22:19.283Z] Copying: 512/512 [B] (average 500 kBps) 00:07:16.043 00:07:16.043 04:22:19 -- dd/posix.sh@93 -- # [[ 1ohncelhw0avk94hcsvdw0xeo03kgwolelru3uy5mv1qm6802adwemmfw9w84gc25fn9l6e2xw1uq5rg9p2y6qif5hywkjnzlv7qmdpmssjergxx5clwyaqfhc1x7j5wneagxhcbrjv1i6mu005220evopg0ufpz7zu49mr9xs5jjxy3yolvzzgjog54cv7ih01a6uzy9uh1sctf9ao0t3i4y7cn4nxe8tn36mp1dkjyodc1awitogunii55bli9iyhwqvq8m6haa6vmqsitkljvkbdeiept9xjnobuookz013keqhvzf7xca5c5p65ef627tdspisk4yot7izxp4mmoj82bstu4u6brg5t6qf49hfk7sqx9ati5skkzorlmgandeim55p2zp3af206tn2hnolbgcd2pki28cw1hprgu2yodkjuena4do5um4mgoye3s864epi1by04q3kyt0t1vx37p50j113wjtyg0lwvimmxwbqjwienv3f8rrli9 == \1\o\h\n\c\e\l\h\w\0\a\v\k\9\4\h\c\s\v\d\w\0\x\e\o\0\3\k\g\w\o\l\e\l\r\u\3\u\y\5\m\v\1\q\m\6\8\0\2\a\d\w\e\m\m\f\w\9\w\8\4\g\c\2\5\f\n\9\l\6\e\2\x\w\1\u\q\5\r\g\9\p\2\y\6\q\i\f\5\h\y\w\k\j\n\z\l\v\7\q\m\d\p\m\s\s\j\e\r\g\x\x\5\c\l\w\y\a\q\f\h\c\1\x\7\j\5\w\n\e\a\g\x\h\c\b\r\j\v\1\i\6\m\u\0\0\5\2\2\0\e\v\o\p\g\0\u\f\p\z\7\z\u\4\9\m\r\9\x\s\5\j\j\x\y\3\y\o\l\v\z\z\g\j\o\g\5\4\c\v\7\i\h\0\1\a\6\u\z\y\9\u\h\1\s\c\t\f\9\a\o\0\t\3\i\4\y\7\c\n\4\n\x\e\8\t\n\3\6\m\p\1\d\k\j\y\o\d\c\1\a\w\i\t\o\g\u\n\i\i\5\5\b\l\i\9\i\y\h\w\q\v\q\8\m\6\h\a\a\6\v\m\q\s\i\t\k\l\j\v\k\b\d\e\i\e\p\t\9\x\j\n\o\b\u\o\o\k\z\0\1\3\k\e\q\h\v\z\f\7\x\c\a\5\c\5\p\6\5\e\f\6\2\7\t\d\s\p\i\s\k\4\y\o\t\7\i\z\x\p\4\m\m\o\j\8\2\b\s\t\u\4\u\6\b\r\g\5\t\6\q\f\4\9\h\f\k\7\s\q\x\9\a\t\i\5\s\k\k\z\o\r\l\m\g\a\n\d\e\i\m\5\5\p\2\z\p\3\a\f\2\0\6\t\n\2\h\n\o\l\b\g\c\d\2\p\k\i\2\8\c\w\1\h\p\r\g\u\2\y\o\d\k\j\u\e\n\a\4\d\o\5\u\m\4\m\g\o\y\e\3\s\8\6\4\e\p\i\1\b\y\0\4\q\3\k\y\t\0\t\1\v\x\3\7\p\5\0\j\1\1\3\w\j\t\y\g\0\l\w\v\i\m\m\x\w\b\q\j\w\i\e\n\v\3\f\8\r\r\l\i\9 ]] 00:07:16.043 04:22:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:16.043 04:22:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:16.043 [2024-12-07 04:22:19.215421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.043 [2024-12-07 04:22:19.215531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58519 ] 00:07:16.302 [2024-12-07 04:22:19.351306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.302 [2024-12-07 04:22:19.402110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.302  [2024-12-07T04:22:19.802Z] Copying: 512/512 [B] (average 250 kBps) 00:07:16.562 00:07:16.562 ************************************ 00:07:16.562 END TEST dd_flags_misc_forced_aio 00:07:16.562 ************************************ 00:07:16.562 04:22:19 -- dd/posix.sh@93 -- # [[ 1ohncelhw0avk94hcsvdw0xeo03kgwolelru3uy5mv1qm6802adwemmfw9w84gc25fn9l6e2xw1uq5rg9p2y6qif5hywkjnzlv7qmdpmssjergxx5clwyaqfhc1x7j5wneagxhcbrjv1i6mu005220evopg0ufpz7zu49mr9xs5jjxy3yolvzzgjog54cv7ih01a6uzy9uh1sctf9ao0t3i4y7cn4nxe8tn36mp1dkjyodc1awitogunii55bli9iyhwqvq8m6haa6vmqsitkljvkbdeiept9xjnobuookz013keqhvzf7xca5c5p65ef627tdspisk4yot7izxp4mmoj82bstu4u6brg5t6qf49hfk7sqx9ati5skkzorlmgandeim55p2zp3af206tn2hnolbgcd2pki28cw1hprgu2yodkjuena4do5um4mgoye3s864epi1by04q3kyt0t1vx37p50j113wjtyg0lwvimmxwbqjwienv3f8rrli9 == \1\o\h\n\c\e\l\h\w\0\a\v\k\9\4\h\c\s\v\d\w\0\x\e\o\0\3\k\g\w\o\l\e\l\r\u\3\u\y\5\m\v\1\q\m\6\8\0\2\a\d\w\e\m\m\f\w\9\w\8\4\g\c\2\5\f\n\9\l\6\e\2\x\w\1\u\q\5\r\g\9\p\2\y\6\q\i\f\5\h\y\w\k\j\n\z\l\v\7\q\m\d\p\m\s\s\j\e\r\g\x\x\5\c\l\w\y\a\q\f\h\c\1\x\7\j\5\w\n\e\a\g\x\h\c\b\r\j\v\1\i\6\m\u\0\0\5\2\2\0\e\v\o\p\g\0\u\f\p\z\7\z\u\4\9\m\r\9\x\s\5\j\j\x\y\3\y\o\l\v\z\z\g\j\o\g\5\4\c\v\7\i\h\0\1\a\6\u\z\y\9\u\h\1\s\c\t\f\9\a\o\0\t\3\i\4\y\7\c\n\4\n\x\e\8\t\n\3\6\m\p\1\d\k\j\y\o\d\c\1\a\w\i\t\o\g\u\n\i\i\5\5\b\l\i\9\i\y\h\w\q\v\q\8\m\6\h\a\a\6\v\m\q\s\i\t\k\l\j\v\k\b\d\e\i\e\p\t\9\x\j\n\o\b\u\o\o\k\z\0\1\3\k\e\q\h\v\z\f\7\x\c\a\5\c\5\p\6\5\e\f\6\2\7\t\d\s\p\i\s\k\4\y\o\t\7\i\z\x\p\4\m\m\o\j\8\2\b\s\t\u\4\u\6\b\r\g\5\t\6\q\f\4\9\h\f\k\7\s\q\x\9\a\t\i\5\s\k\k\z\o\r\l\m\g\a\n\d\e\i\m\5\5\p\2\z\p\3\a\f\2\0\6\t\n\2\h\n\o\l\b\g\c\d\2\p\k\i\2\8\c\w\1\h\p\r\g\u\2\y\o\d\k\j\u\e\n\a\4\d\o\5\u\m\4\m\g\o\y\e\3\s\8\6\4\e\p\i\1\b\y\0\4\q\3\k\y\t\0\t\1\v\x\3\7\p\5\0\j\1\1\3\w\j\t\y\g\0\l\w\v\i\m\m\x\w\b\q\j\w\i\e\n\v\3\f\8\r\r\l\i\9 ]] 00:07:16.562 00:07:16.562 real 0m3.587s 00:07:16.562 user 0m1.905s 00:07:16.562 sys 0m0.711s 00:07:16.562 04:22:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.562 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.562 04:22:19 -- dd/posix.sh@1 -- # cleanup 00:07:16.562 04:22:19 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:16.562 04:22:19 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:16.562 ************************************ 00:07:16.562 END TEST spdk_dd_posix 00:07:16.562 ************************************ 00:07:16.562 00:07:16.562 real 0m16.981s 00:07:16.562 user 0m7.917s 00:07:16.562 sys 0m3.291s 00:07:16.562 04:22:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.562 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.562 04:22:19 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:16.562 04:22:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.562 04:22:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:07:16.562 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.562 ************************************ 00:07:16.562 START TEST spdk_dd_malloc 00:07:16.562 ************************************ 00:07:16.562 04:22:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:16.562 * Looking for test storage... 00:07:16.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:16.562 04:22:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:16.822 04:22:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:16.822 04:22:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:16.822 04:22:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:16.822 04:22:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:16.822 04:22:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:16.822 04:22:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:16.822 04:22:19 -- scripts/common.sh@335 -- # IFS=.-: 00:07:16.822 04:22:19 -- scripts/common.sh@335 -- # read -ra ver1 00:07:16.822 04:22:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.822 04:22:19 -- scripts/common.sh@336 -- # read -ra ver2 00:07:16.822 04:22:19 -- scripts/common.sh@337 -- # local 'op=<' 00:07:16.822 04:22:19 -- scripts/common.sh@339 -- # ver1_l=2 00:07:16.822 04:22:19 -- scripts/common.sh@340 -- # ver2_l=1 00:07:16.822 04:22:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:16.822 04:22:19 -- scripts/common.sh@343 -- # case "$op" in 00:07:16.822 04:22:19 -- scripts/common.sh@344 -- # : 1 00:07:16.822 04:22:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:16.822 04:22:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.822 04:22:19 -- scripts/common.sh@364 -- # decimal 1 00:07:16.822 04:22:19 -- scripts/common.sh@352 -- # local d=1 00:07:16.822 04:22:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.822 04:22:19 -- scripts/common.sh@354 -- # echo 1 00:07:16.822 04:22:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:16.822 04:22:19 -- scripts/common.sh@365 -- # decimal 2 00:07:16.822 04:22:19 -- scripts/common.sh@352 -- # local d=2 00:07:16.822 04:22:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.822 04:22:19 -- scripts/common.sh@354 -- # echo 2 00:07:16.822 04:22:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:16.822 04:22:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:16.822 04:22:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:16.822 04:22:19 -- scripts/common.sh@367 -- # return 0 00:07:16.822 04:22:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.822 04:22:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:16.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.822 --rc genhtml_branch_coverage=1 00:07:16.822 --rc genhtml_function_coverage=1 00:07:16.822 --rc genhtml_legend=1 00:07:16.822 --rc geninfo_all_blocks=1 00:07:16.822 --rc geninfo_unexecuted_blocks=1 00:07:16.822 00:07:16.822 ' 00:07:16.822 04:22:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:16.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.822 --rc genhtml_branch_coverage=1 00:07:16.822 --rc genhtml_function_coverage=1 00:07:16.822 --rc genhtml_legend=1 00:07:16.822 --rc geninfo_all_blocks=1 00:07:16.822 --rc geninfo_unexecuted_blocks=1 00:07:16.822 00:07:16.822 ' 00:07:16.822 04:22:19 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:07:16.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.823 --rc genhtml_branch_coverage=1 00:07:16.823 --rc genhtml_function_coverage=1 00:07:16.823 --rc genhtml_legend=1 00:07:16.823 --rc geninfo_all_blocks=1 00:07:16.823 --rc geninfo_unexecuted_blocks=1 00:07:16.823 00:07:16.823 ' 00:07:16.823 04:22:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:16.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.823 --rc genhtml_branch_coverage=1 00:07:16.823 --rc genhtml_function_coverage=1 00:07:16.823 --rc genhtml_legend=1 00:07:16.823 --rc geninfo_all_blocks=1 00:07:16.823 --rc geninfo_unexecuted_blocks=1 00:07:16.823 00:07:16.823 ' 00:07:16.823 04:22:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.823 04:22:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.823 04:22:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.823 04:22:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.823 04:22:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.823 04:22:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.823 04:22:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.823 04:22:19 -- paths/export.sh@5 -- # export PATH 00:07:16.823 04:22:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.823 04:22:19 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:16.823 04:22:19 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.823 04:22:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.823 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.823 ************************************ 00:07:16.823 START TEST dd_malloc_copy 00:07:16.823 ************************************ 00:07:16.823 04:22:19 -- common/autotest_common.sh@1114 -- # malloc_copy 00:07:16.823 04:22:19 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:16.823 04:22:19 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:16.823 04:22:19 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:16.823 04:22:19 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:16.823 04:22:19 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:16.823 04:22:19 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:16.823 04:22:19 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:16.823 04:22:19 -- dd/malloc.sh@28 -- # gen_conf 00:07:16.823 04:22:19 -- dd/common.sh@31 -- # xtrace_disable 00:07:16.823 04:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.823 [2024-12-07 04:22:19.946022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.823 [2024-12-07 04:22:19.946277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58600 ] 00:07:16.823 { 00:07:16.823 "subsystems": [ 00:07:16.823 { 00:07:16.823 "subsystem": "bdev", 00:07:16.823 "config": [ 00:07:16.823 { 00:07:16.823 "params": { 00:07:16.823 "block_size": 512, 00:07:16.823 "num_blocks": 1048576, 00:07:16.823 "name": "malloc0" 00:07:16.823 }, 00:07:16.823 "method": "bdev_malloc_create" 00:07:16.823 }, 00:07:16.823 { 00:07:16.823 "params": { 00:07:16.823 "block_size": 512, 00:07:16.823 "num_blocks": 1048576, 00:07:16.823 "name": "malloc1" 00:07:16.823 }, 00:07:16.823 "method": "bdev_malloc_create" 00:07:16.823 }, 00:07:16.823 { 00:07:16.823 "method": "bdev_wait_for_examine" 00:07:16.823 } 00:07:16.823 ] 00:07:16.823 } 00:07:16.823 ] 00:07:16.823 } 00:07:17.082 [2024-12-07 04:22:20.084738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.082 [2024-12-07 04:22:20.137737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.456  [2024-12-07T04:22:22.674Z] Copying: 246/512 [MB] (246 MBps) [2024-12-07T04:22:22.674Z] Copying: 490/512 [MB] (243 MBps) [2024-12-07T04:22:22.934Z] Copying: 512/512 [MB] (average 245 MBps) 00:07:19.694 00:07:19.694 04:22:22 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:19.694 04:22:22 -- dd/malloc.sh@33 -- # gen_conf 00:07:19.694 04:22:22 -- dd/common.sh@31 -- # xtrace_disable 00:07:19.694 04:22:22 -- common/autotest_common.sh@10 -- # set +x 00:07:19.694 [2024-12-07 04:22:22.828223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.694 [2024-12-07 04:22:22.828475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:07:19.694 { 00:07:19.694 "subsystems": [ 00:07:19.694 { 00:07:19.694 "subsystem": "bdev", 00:07:19.694 "config": [ 00:07:19.694 { 00:07:19.694 "params": { 00:07:19.694 "block_size": 512, 00:07:19.694 "num_blocks": 1048576, 00:07:19.694 "name": "malloc0" 00:07:19.694 }, 00:07:19.694 "method": "bdev_malloc_create" 00:07:19.694 }, 00:07:19.694 { 00:07:19.694 "params": { 00:07:19.694 "block_size": 512, 00:07:19.694 "num_blocks": 1048576, 00:07:19.694 "name": "malloc1" 00:07:19.694 }, 00:07:19.694 "method": "bdev_malloc_create" 00:07:19.694 }, 00:07:19.694 { 00:07:19.694 "method": "bdev_wait_for_examine" 00:07:19.694 } 00:07:19.694 ] 00:07:19.694 } 00:07:19.694 ] 00:07:19.694 } 00:07:19.953 [2024-12-07 04:22:22.962991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.953 [2024-12-07 04:22:23.009295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.332  [2024-12-07T04:22:25.511Z] Copying: 246/512 [MB] (246 MBps) [2024-12-07T04:22:25.511Z] Copying: 491/512 [MB] (245 MBps) [2024-12-07T04:22:25.769Z] Copying: 512/512 [MB] (average 246 MBps) 00:07:22.529 00:07:22.529 ************************************ 00:07:22.529 END TEST dd_malloc_copy 00:07:22.529 ************************************ 00:07:22.529 00:07:22.529 real 0m5.731s 00:07:22.529 user 0m5.112s 00:07:22.529 sys 0m0.470s 00:07:22.529 04:22:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.529 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.529 ************************************ 00:07:22.529 END TEST spdk_dd_malloc 00:07:22.529 ************************************ 00:07:22.529 00:07:22.529 real 0m5.955s 00:07:22.529 user 0m5.234s 00:07:22.529 sys 0m0.573s 00:07:22.529 04:22:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.529 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.529 04:22:25 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:22.529 04:22:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:22.529 04:22:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.529 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.529 ************************************ 00:07:22.529 START TEST spdk_dd_bdev_to_bdev 00:07:22.529 ************************************ 00:07:22.529 04:22:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:22.788 * Looking for test storage... 
00:07:22.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:22.788 04:22:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.788 04:22:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.788 04:22:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.788 04:22:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.788 04:22:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.788 04:22:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.788 04:22:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.788 04:22:25 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.788 04:22:25 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.788 04:22:25 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.788 04:22:25 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.788 04:22:25 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.788 04:22:25 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.788 04:22:25 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.788 04:22:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.788 04:22:25 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.788 04:22:25 -- scripts/common.sh@344 -- # : 1 00:07:22.788 04:22:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.788 04:22:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.788 04:22:25 -- scripts/common.sh@364 -- # decimal 1 00:07:22.788 04:22:25 -- scripts/common.sh@352 -- # local d=1 00:07:22.788 04:22:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.788 04:22:25 -- scripts/common.sh@354 -- # echo 1 00:07:22.788 04:22:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.788 04:22:25 -- scripts/common.sh@365 -- # decimal 2 00:07:22.788 04:22:25 -- scripts/common.sh@352 -- # local d=2 00:07:22.788 04:22:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.788 04:22:25 -- scripts/common.sh@354 -- # echo 2 00:07:22.788 04:22:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.788 04:22:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.788 04:22:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.788 04:22:25 -- scripts/common.sh@367 -- # return 0 00:07:22.788 04:22:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.788 04:22:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.788 --rc genhtml_branch_coverage=1 00:07:22.788 --rc genhtml_function_coverage=1 00:07:22.788 --rc genhtml_legend=1 00:07:22.788 --rc geninfo_all_blocks=1 00:07:22.788 --rc geninfo_unexecuted_blocks=1 00:07:22.788 00:07:22.788 ' 00:07:22.788 04:22:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.788 --rc genhtml_branch_coverage=1 00:07:22.788 --rc genhtml_function_coverage=1 00:07:22.788 --rc genhtml_legend=1 00:07:22.788 --rc geninfo_all_blocks=1 00:07:22.788 --rc geninfo_unexecuted_blocks=1 00:07:22.788 00:07:22.788 ' 00:07:22.788 04:22:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.788 --rc genhtml_branch_coverage=1 00:07:22.788 --rc genhtml_function_coverage=1 00:07:22.788 --rc genhtml_legend=1 00:07:22.788 --rc geninfo_all_blocks=1 00:07:22.788 --rc geninfo_unexecuted_blocks=1 00:07:22.788 00:07:22.788 ' 00:07:22.788 04:22:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.788 --rc genhtml_branch_coverage=1 00:07:22.788 --rc genhtml_function_coverage=1 00:07:22.788 --rc genhtml_legend=1 00:07:22.788 --rc geninfo_all_blocks=1 00:07:22.788 --rc geninfo_unexecuted_blocks=1 00:07:22.788 00:07:22.788 ' 00:07:22.788 04:22:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.788 04:22:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.788 04:22:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.788 04:22:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.788 04:22:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.788 04:22:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.788 04:22:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.788 04:22:25 -- paths/export.sh@5 -- # export PATH 00:07:22.788 04:22:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:22.788 04:22:25 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:22.788 04:22:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:22.788 04:22:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.788 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.788 ************************************ 00:07:22.788 START TEST dd_inflate_file 00:07:22.788 ************************************ 00:07:22.788 04:22:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:22.788 [2024-12-07 04:22:25.956888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:22.788 [2024-12-07 04:22:25.956987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58748 ] 00:07:23.047 [2024-12-07 04:22:26.092870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.047 [2024-12-07 04:22:26.140958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.047  [2024-12-07T04:22:26.546Z] Copying: 64/64 [MB] (average 2133 MBps) 00:07:23.306 00:07:23.306 00:07:23.306 real 0m0.465s 00:07:23.306 user 0m0.234s 00:07:23.306 sys 0m0.115s 00:07:23.306 04:22:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.306 ************************************ 00:07:23.306 END TEST dd_inflate_file 00:07:23.306 ************************************ 00:07:23.306 04:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.306 04:22:26 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:23.306 04:22:26 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:23.306 04:22:26 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:23.306 04:22:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:23.306 04:22:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.306 04:22:26 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:23.306 04:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.306 04:22:26 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.306 04:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.306 ************************************ 00:07:23.306 START TEST dd_copy_to_out_bdev 00:07:23.306 ************************************ 00:07:23.306 04:22:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:23.306 { 00:07:23.306 "subsystems": [ 00:07:23.306 { 00:07:23.306 "subsystem": "bdev", 00:07:23.306 "config": [ 00:07:23.306 { 00:07:23.306 "params": { 00:07:23.306 "trtype": "pcie", 00:07:23.306 "traddr": "0000:00:06.0", 00:07:23.306 "name": "Nvme0" 00:07:23.306 }, 00:07:23.306 "method": "bdev_nvme_attach_controller" 00:07:23.306 }, 00:07:23.306 { 00:07:23.306 "params": { 00:07:23.306 "trtype": "pcie", 00:07:23.306 "traddr": "0000:00:07.0", 00:07:23.306 "name": "Nvme1" 00:07:23.306 }, 00:07:23.306 "method": "bdev_nvme_attach_controller" 00:07:23.306 }, 00:07:23.306 { 00:07:23.306 "method": "bdev_wait_for_examine" 00:07:23.306 } 00:07:23.306 ] 00:07:23.306 } 00:07:23.306 ] 00:07:23.306 } 00:07:23.306 [2024-12-07 04:22:26.481352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:23.306 [2024-12-07 04:22:26.481951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58774 ] 00:07:23.565 [2024-12-07 04:22:26.619437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.565 [2024-12-07 04:22:26.665810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.944  [2024-12-07T04:22:28.184Z] Copying: 50/64 [MB] (50 MBps) [2024-12-07T04:22:28.444Z] Copying: 64/64 [MB] (average 50 MBps) 00:07:25.204 00:07:25.204 00:07:25.204 real 0m1.853s 00:07:25.204 user 0m1.635s 00:07:25.204 sys 0m0.151s 00:07:25.204 04:22:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.204 04:22:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.204 ************************************ 00:07:25.204 END TEST dd_copy_to_out_bdev 00:07:25.204 ************************************ 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:25.204 04:22:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.204 04:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.204 04:22:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.204 ************************************ 00:07:25.204 START TEST dd_offset_magic 00:07:25.204 ************************************ 00:07:25.204 04:22:28 -- common/autotest_common.sh@1114 -- # offset_magic 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:25.204 04:22:28 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:25.205 04:22:28 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:25.205 04:22:28 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.205 04:22:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.205 [2024-12-07 04:22:28.392668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:25.205 [2024-12-07 04:22:28.392753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58818 ] 00:07:25.205 { 00:07:25.205 "subsystems": [ 00:07:25.205 { 00:07:25.205 "subsystem": "bdev", 00:07:25.205 "config": [ 00:07:25.205 { 00:07:25.205 "params": { 00:07:25.205 "trtype": "pcie", 00:07:25.205 "traddr": "0000:00:06.0", 00:07:25.205 "name": "Nvme0" 00:07:25.205 }, 00:07:25.205 "method": "bdev_nvme_attach_controller" 00:07:25.205 }, 00:07:25.205 { 00:07:25.205 "params": { 00:07:25.205 "trtype": "pcie", 00:07:25.205 "traddr": "0000:00:07.0", 00:07:25.205 "name": "Nvme1" 00:07:25.205 }, 00:07:25.205 "method": "bdev_nvme_attach_controller" 00:07:25.205 }, 00:07:25.205 { 00:07:25.205 "method": "bdev_wait_for_examine" 00:07:25.205 } 00:07:25.205 ] 00:07:25.205 } 00:07:25.205 ] 00:07:25.205 } 00:07:25.465 [2024-12-07 04:22:28.528473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.465 [2024-12-07 04:22:28.577475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.724  [2024-12-07T04:22:29.224Z] Copying: 65/65 [MB] (average 970 MBps) 00:07:25.984 00:07:25.984 04:22:29 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:25.984 04:22:29 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:25.984 04:22:29 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.984 04:22:29 -- common/autotest_common.sh@10 -- # set +x 00:07:25.984 [2024-12-07 04:22:29.047796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:25.984 [2024-12-07 04:22:29.047873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58838 ] 00:07:25.984 { 00:07:25.984 "subsystems": [ 00:07:25.984 { 00:07:25.984 "subsystem": "bdev", 00:07:25.984 "config": [ 00:07:25.984 { 00:07:25.984 "params": { 00:07:25.984 "trtype": "pcie", 00:07:25.984 "traddr": "0000:00:06.0", 00:07:25.984 "name": "Nvme0" 00:07:25.984 }, 00:07:25.984 "method": "bdev_nvme_attach_controller" 00:07:25.984 }, 00:07:25.984 { 00:07:25.984 "params": { 00:07:25.984 "trtype": "pcie", 00:07:25.984 "traddr": "0000:00:07.0", 00:07:25.984 "name": "Nvme1" 00:07:25.984 }, 00:07:25.984 "method": "bdev_nvme_attach_controller" 00:07:25.984 }, 00:07:25.984 { 00:07:25.984 "method": "bdev_wait_for_examine" 00:07:25.984 } 00:07:25.984 ] 00:07:25.984 } 00:07:25.984 ] 00:07:25.984 } 00:07:25.984 [2024-12-07 04:22:29.175293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.243 [2024-12-07 04:22:29.226332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.243  [2024-12-07T04:22:29.743Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:26.503 00:07:26.503 04:22:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:26.503 04:22:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:26.503 04:22:29 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:26.503 04:22:29 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:26.503 04:22:29 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:26.503 04:22:29 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.503 04:22:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.503 [2024-12-07 04:22:29.613062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:26.503 [2024-12-07 04:22:29.613146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58847 ] 00:07:26.503 { 00:07:26.503 "subsystems": [ 00:07:26.503 { 00:07:26.503 "subsystem": "bdev", 00:07:26.503 "config": [ 00:07:26.503 { 00:07:26.503 "params": { 00:07:26.503 "trtype": "pcie", 00:07:26.503 "traddr": "0000:00:06.0", 00:07:26.503 "name": "Nvme0" 00:07:26.503 }, 00:07:26.503 "method": "bdev_nvme_attach_controller" 00:07:26.503 }, 00:07:26.503 { 00:07:26.503 "params": { 00:07:26.503 "trtype": "pcie", 00:07:26.503 "traddr": "0000:00:07.0", 00:07:26.503 "name": "Nvme1" 00:07:26.503 }, 00:07:26.503 "method": "bdev_nvme_attach_controller" 00:07:26.503 }, 00:07:26.503 { 00:07:26.503 "method": "bdev_wait_for_examine" 00:07:26.503 } 00:07:26.503 ] 00:07:26.503 } 00:07:26.503 ] 00:07:26.503 } 00:07:26.503 [2024-12-07 04:22:29.740858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.762 [2024-12-07 04:22:29.790663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.028  [2024-12-07T04:22:30.268Z] Copying: 65/65 [MB] (average 1065 MBps) 00:07:27.028 00:07:27.028 04:22:30 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:27.028 04:22:30 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:27.028 04:22:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.028 04:22:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.287 [2024-12-07 04:22:30.282397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.287 [2024-12-07 04:22:30.282497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58867 ] 00:07:27.287 { 00:07:27.287 "subsystems": [ 00:07:27.287 { 00:07:27.287 "subsystem": "bdev", 00:07:27.287 "config": [ 00:07:27.287 { 00:07:27.287 "params": { 00:07:27.287 "trtype": "pcie", 00:07:27.287 "traddr": "0000:00:06.0", 00:07:27.287 "name": "Nvme0" 00:07:27.287 }, 00:07:27.287 "method": "bdev_nvme_attach_controller" 00:07:27.287 }, 00:07:27.287 { 00:07:27.287 "params": { 00:07:27.287 "trtype": "pcie", 00:07:27.287 "traddr": "0000:00:07.0", 00:07:27.287 "name": "Nvme1" 00:07:27.287 }, 00:07:27.287 "method": "bdev_nvme_attach_controller" 00:07:27.287 }, 00:07:27.287 { 00:07:27.287 "method": "bdev_wait_for_examine" 00:07:27.287 } 00:07:27.287 ] 00:07:27.287 } 00:07:27.287 ] 00:07:27.287 } 00:07:27.287 [2024-12-07 04:22:30.419294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.287 [2024-12-07 04:22:30.468940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.546  [2024-12-07T04:22:31.045Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:27.805 00:07:27.805 04:22:30 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:27.805 04:22:30 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:27.805 00:07:27.805 real 0m2.473s 00:07:27.805 user 0m1.847s 00:07:27.805 sys 0m0.435s 00:07:27.805 ************************************ 00:07:27.805 END TEST dd_offset_magic 00:07:27.805 ************************************ 00:07:27.805 04:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.805 04:22:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.805 04:22:30 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:27.805 04:22:30 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:27.805 04:22:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.805 04:22:30 -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.805 04:22:30 -- dd/common.sh@12 -- # local size=4194330 00:07:27.805 04:22:30 -- dd/common.sh@14 -- # local bs=1048576 00:07:27.805 04:22:30 -- dd/common.sh@15 -- # local count=5 00:07:27.805 04:22:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:27.805 04:22:30 -- dd/common.sh@18 -- # gen_conf 00:07:27.805 04:22:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.805 04:22:30 -- common/autotest_common.sh@10 -- # set +x 00:07:27.805 [2024-12-07 04:22:30.911619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.805 [2024-12-07 04:22:30.911876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58897 ] 00:07:27.805 { 00:07:27.805 "subsystems": [ 00:07:27.805 { 00:07:27.805 "subsystem": "bdev", 00:07:27.805 "config": [ 00:07:27.805 { 00:07:27.805 "params": { 00:07:27.805 "trtype": "pcie", 00:07:27.805 "traddr": "0000:00:06.0", 00:07:27.805 "name": "Nvme0" 00:07:27.805 }, 00:07:27.805 "method": "bdev_nvme_attach_controller" 00:07:27.805 }, 00:07:27.805 { 00:07:27.805 "params": { 00:07:27.805 "trtype": "pcie", 00:07:27.805 "traddr": "0000:00:07.0", 00:07:27.806 "name": "Nvme1" 00:07:27.806 }, 00:07:27.806 "method": "bdev_nvme_attach_controller" 00:07:27.806 }, 00:07:27.806 { 00:07:27.806 "method": "bdev_wait_for_examine" 00:07:27.806 } 00:07:27.806 ] 00:07:27.806 } 00:07:27.806 ] 00:07:27.806 } 00:07:28.065 [2024-12-07 04:22:31.048879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.065 [2024-12-07 04:22:31.096003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.065  [2024-12-07T04:22:31.565Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:28.325 00:07:28.325 04:22:31 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:28.325 04:22:31 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:28.325 04:22:31 -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.325 04:22:31 -- dd/common.sh@12 -- # local size=4194330 00:07:28.325 04:22:31 -- dd/common.sh@14 -- # local bs=1048576 00:07:28.325 04:22:31 -- dd/common.sh@15 -- # local count=5 00:07:28.325 04:22:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:28.325 04:22:31 -- dd/common.sh@18 -- # gen_conf 00:07:28.325 04:22:31 -- dd/common.sh@31 -- # xtrace_disable 00:07:28.325 04:22:31 -- common/autotest_common.sh@10 -- # set +x 00:07:28.325 [2024-12-07 04:22:31.494783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:28.325 [2024-12-07 04:22:31.494874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:07:28.325 { 00:07:28.325 "subsystems": [ 00:07:28.325 { 00:07:28.325 "subsystem": "bdev", 00:07:28.325 "config": [ 00:07:28.325 { 00:07:28.325 "params": { 00:07:28.325 "trtype": "pcie", 00:07:28.325 "traddr": "0000:00:06.0", 00:07:28.325 "name": "Nvme0" 00:07:28.325 }, 00:07:28.325 "method": "bdev_nvme_attach_controller" 00:07:28.325 }, 00:07:28.325 { 00:07:28.325 "params": { 00:07:28.325 "trtype": "pcie", 00:07:28.325 "traddr": "0000:00:07.0", 00:07:28.325 "name": "Nvme1" 00:07:28.325 }, 00:07:28.325 "method": "bdev_nvme_attach_controller" 00:07:28.325 }, 00:07:28.325 { 00:07:28.325 "method": "bdev_wait_for_examine" 00:07:28.325 } 00:07:28.325 ] 00:07:28.325 } 00:07:28.325 ] 00:07:28.325 } 00:07:28.585 [2024-12-07 04:22:31.631568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.585 [2024-12-07 04:22:31.679710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.845  [2024-12-07T04:22:32.085Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:28.845 00:07:28.845 04:22:32 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:28.845 ************************************ 00:07:28.845 END TEST spdk_dd_bdev_to_bdev 00:07:28.845 ************************************ 00:07:28.845 00:07:28.845 real 0m6.324s 00:07:28.845 user 0m4.719s 00:07:28.845 sys 0m1.112s 00:07:28.845 04:22:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.845 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.105 04:22:32 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:29.105 04:22:32 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:29.105 04:22:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.105 04:22:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.105 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.105 ************************************ 00:07:29.105 START TEST spdk_dd_uring 00:07:29.105 ************************************ 00:07:29.105 04:22:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:29.105 * Looking for test storage... 
00:07:29.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.105 04:22:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:29.105 04:22:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:29.105 04:22:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:29.105 04:22:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:29.105 04:22:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:29.105 04:22:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:29.105 04:22:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:29.105 04:22:32 -- scripts/common.sh@335 -- # IFS=.-: 00:07:29.105 04:22:32 -- scripts/common.sh@335 -- # read -ra ver1 00:07:29.105 04:22:32 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.105 04:22:32 -- scripts/common.sh@336 -- # read -ra ver2 00:07:29.105 04:22:32 -- scripts/common.sh@337 -- # local 'op=<' 00:07:29.105 04:22:32 -- scripts/common.sh@339 -- # ver1_l=2 00:07:29.105 04:22:32 -- scripts/common.sh@340 -- # ver2_l=1 00:07:29.105 04:22:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:29.105 04:22:32 -- scripts/common.sh@343 -- # case "$op" in 00:07:29.105 04:22:32 -- scripts/common.sh@344 -- # : 1 00:07:29.105 04:22:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:29.105 04:22:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.105 04:22:32 -- scripts/common.sh@364 -- # decimal 1 00:07:29.105 04:22:32 -- scripts/common.sh@352 -- # local d=1 00:07:29.105 04:22:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.105 04:22:32 -- scripts/common.sh@354 -- # echo 1 00:07:29.105 04:22:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:29.105 04:22:32 -- scripts/common.sh@365 -- # decimal 2 00:07:29.105 04:22:32 -- scripts/common.sh@352 -- # local d=2 00:07:29.105 04:22:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.105 04:22:32 -- scripts/common.sh@354 -- # echo 2 00:07:29.105 04:22:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:29.105 04:22:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:29.105 04:22:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:29.105 04:22:32 -- scripts/common.sh@367 -- # return 0 00:07:29.105 04:22:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.105 04:22:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.105 --rc genhtml_branch_coverage=1 00:07:29.105 --rc genhtml_function_coverage=1 00:07:29.105 --rc genhtml_legend=1 00:07:29.105 --rc geninfo_all_blocks=1 00:07:29.105 --rc geninfo_unexecuted_blocks=1 00:07:29.105 00:07:29.105 ' 00:07:29.105 04:22:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.105 --rc genhtml_branch_coverage=1 00:07:29.105 --rc genhtml_function_coverage=1 00:07:29.105 --rc genhtml_legend=1 00:07:29.105 --rc geninfo_all_blocks=1 00:07:29.105 --rc geninfo_unexecuted_blocks=1 00:07:29.105 00:07:29.105 ' 00:07:29.105 04:22:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.105 --rc genhtml_branch_coverage=1 00:07:29.105 --rc genhtml_function_coverage=1 00:07:29.105 --rc genhtml_legend=1 00:07:29.105 --rc geninfo_all_blocks=1 00:07:29.105 --rc geninfo_unexecuted_blocks=1 00:07:29.105 00:07:29.105 ' 00:07:29.105 04:22:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:29.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.105 --rc genhtml_branch_coverage=1 00:07:29.105 --rc genhtml_function_coverage=1 00:07:29.105 --rc genhtml_legend=1 00:07:29.105 --rc geninfo_all_blocks=1 00:07:29.105 --rc geninfo_unexecuted_blocks=1 00:07:29.105 00:07:29.105 ' 00:07:29.105 04:22:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.105 04:22:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.105 04:22:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.105 04:22:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.105 04:22:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.105 04:22:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.106 04:22:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.106 04:22:32 -- paths/export.sh@5 -- # export PATH 00:07:29.106 04:22:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.106 04:22:32 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:29.106 04:22:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.106 04:22:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.106 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.106 ************************************ 00:07:29.106 START TEST dd_uring_copy 00:07:29.106 ************************************ 00:07:29.106 04:22:32 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:07:29.106 04:22:32 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:29.106 04:22:32 -- dd/uring.sh@16 -- # local magic 00:07:29.106 04:22:32 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:29.106 04:22:32 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:29.106 04:22:32 -- dd/uring.sh@19 -- # local verify_magic 00:07:29.106 04:22:32 -- dd/uring.sh@21 -- # init_zram 00:07:29.106 04:22:32 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:29.106 04:22:32 -- dd/common.sh@164 -- # return 00:07:29.106 04:22:32 -- dd/uring.sh@22 -- # create_zram_dev 00:07:29.106 04:22:32 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:29.106 04:22:32 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:29.106 04:22:32 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:29.106 04:22:32 -- dd/common.sh@181 -- # local id=1 00:07:29.106 04:22:32 -- dd/common.sh@182 -- # local size=512M 00:07:29.106 04:22:32 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:29.106 04:22:32 -- dd/common.sh@186 -- # echo 512M 00:07:29.106 04:22:32 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:29.106 04:22:32 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:29.106 04:22:32 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:29.106 04:22:32 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:29.106 04:22:32 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:29.106 04:22:32 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:29.106 04:22:32 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:29.106 04:22:32 -- dd/common.sh@98 -- # xtrace_disable 00:07:29.106 04:22:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.106 04:22:32 -- dd/uring.sh@41 -- # magic=m3azwzugqr03ggntksok13nd3mee1k42tdwnq6kocev00hk7x75md035gvzazx9xu1mzoydan3hvqwafq65g2mtc1pyjy6q79jmhf85wq0f0j5j9fj3j6wgeltkea4uzgwld3m7nmykc8xpint650hxezwzxqdrzkaxtxqtwl6iyrbigofaduhablvggu37zkbx0dujtkt2wkr9fvhdhvau698k5ltf29nevbpvl5kn5c15iy2cqjcvwu3e5yljzi6spzq03wa3dn2tzh5k203ffhjcjt3o1mb4xjpbgk3skj4l9g7z5csb5jvnaspzywpl22ntj79es4ub4u5eqnrmsem9hdjbpamus5y15pv0ehuaig22tyuceihmdm6txzrcokxc6bdzrzh8hma056k7swg13sbjwpwxz0iv9unojovq8hgivb3c6bfj7877jfxg6ekf8vpi5cp2gz4wi0oxepx3nhozzf09l9fsjlyav7kekmadsyerq1geiy5tqvvi1myymx88zr1ucn2b43bkuo33yg1xnw2chr3uspmutzjkc4eq32otjo5jrw7shu01f4ab87wigvz7xx1lyskq93x7uwxcsc1qw9a0zt5bo2tx2s0cq743b626s2srgke3rio7gptrfs6755mqqaeqd97tijdt57c7ji7c3r0h0u4xux3lgz6qn884gw69d1toob9qbc3votcffzy4k14w9d265rgwbdyd1eqqjv6hmat3173yf4iebtk95sayyf7bwdituztb0v80bk9fq0xkztf46um582cqg04bzajvuxfog5mppinowndvt5mneehxi94ehw4pi3zry56cw75pie9c5k9jdkq6dar365oko1dsj01bif1xzydxtcsqvdlpaqzq2ph2e6meuhtglymvf416m6ouyvxdwxxier4v7vxibpmpl8goozx3lmr9520oyy80lr6322j6m4o49f82xjsjuwg9mxj20vpfa1y56krm3meqi7e80urthi8no 00:07:29.106 04:22:32 -- dd/uring.sh@42 -- # echo 
m3azwzugqr03ggntksok13nd3mee1k42tdwnq6kocev00hk7x75md035gvzazx9xu1mzoydan3hvqwafq65g2mtc1pyjy6q79jmhf85wq0f0j5j9fj3j6wgeltkea4uzgwld3m7nmykc8xpint650hxezwzxqdrzkaxtxqtwl6iyrbigofaduhablvggu37zkbx0dujtkt2wkr9fvhdhvau698k5ltf29nevbpvl5kn5c15iy2cqjcvwu3e5yljzi6spzq03wa3dn2tzh5k203ffhjcjt3o1mb4xjpbgk3skj4l9g7z5csb5jvnaspzywpl22ntj79es4ub4u5eqnrmsem9hdjbpamus5y15pv0ehuaig22tyuceihmdm6txzrcokxc6bdzrzh8hma056k7swg13sbjwpwxz0iv9unojovq8hgivb3c6bfj7877jfxg6ekf8vpi5cp2gz4wi0oxepx3nhozzf09l9fsjlyav7kekmadsyerq1geiy5tqvvi1myymx88zr1ucn2b43bkuo33yg1xnw2chr3uspmutzjkc4eq32otjo5jrw7shu01f4ab87wigvz7xx1lyskq93x7uwxcsc1qw9a0zt5bo2tx2s0cq743b626s2srgke3rio7gptrfs6755mqqaeqd97tijdt57c7ji7c3r0h0u4xux3lgz6qn884gw69d1toob9qbc3votcffzy4k14w9d265rgwbdyd1eqqjv6hmat3173yf4iebtk95sayyf7bwdituztb0v80bk9fq0xkztf46um582cqg04bzajvuxfog5mppinowndvt5mneehxi94ehw4pi3zry56cw75pie9c5k9jdkq6dar365oko1dsj01bif1xzydxtcsqvdlpaqzq2ph2e6meuhtglymvf416m6ouyvxdwxxier4v7vxibpmpl8goozx3lmr9520oyy80lr6322j6m4o49f82xjsjuwg9mxj20vpfa1y56krm3meqi7e80urthi8no 00:07:29.106 04:22:32 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:29.364 [2024-12-07 04:22:32.372119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.365 [2024-12-07 04:22:32.372376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:07:29.365 [2024-12-07 04:22:32.503172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.365 [2024-12-07 04:22:32.550336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.933  [2024-12-07T04:22:33.432Z] Copying: 511/511 [MB] (average 1809 MBps) 00:07:30.192 00:07:30.192 04:22:33 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:30.192 04:22:33 -- dd/uring.sh@54 -- # gen_conf 00:07:30.192 04:22:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.192 04:22:33 -- common/autotest_common.sh@10 -- # set +x 00:07:30.192 { 00:07:30.192 "subsystems": [ 00:07:30.192 { 00:07:30.192 "subsystem": "bdev", 00:07:30.192 "config": [ 00:07:30.192 { 00:07:30.192 "params": { 00:07:30.192 "block_size": 512, 00:07:30.192 "num_blocks": 1048576, 00:07:30.192 "name": "malloc0" 00:07:30.192 }, 00:07:30.192 "method": "bdev_malloc_create" 00:07:30.192 }, 00:07:30.192 { 00:07:30.192 "params": { 00:07:30.192 "filename": "/dev/zram1", 00:07:30.192 "name": "uring0" 00:07:30.192 }, 00:07:30.192 "method": "bdev_uring_create" 00:07:30.192 }, 00:07:30.192 { 00:07:30.192 "method": "bdev_wait_for_examine" 00:07:30.192 } 00:07:30.192 ] 00:07:30.192 } 00:07:30.192 ] 00:07:30.192 } 00:07:30.192 [2024-12-07 04:22:33.287218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:30.192 [2024-12-07 04:22:33.287308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59001 ] 00:07:30.192 [2024-12-07 04:22:33.426485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.451 [2024-12-07 04:22:33.487516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.831  [2024-12-07T04:22:36.011Z] Copying: 230/512 [MB] (230 MBps) [2024-12-07T04:22:36.011Z] Copying: 476/512 [MB] (246 MBps) [2024-12-07T04:22:36.271Z] Copying: 512/512 [MB] (average 237 MBps) 00:07:33.031 00:07:33.031 04:22:36 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:33.031 04:22:36 -- dd/uring.sh@60 -- # gen_conf 00:07:33.031 04:22:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:33.031 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:07:33.031 [2024-12-07 04:22:36.164354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.031 [2024-12-07 04:22:36.164454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:07:33.031 { 00:07:33.031 "subsystems": [ 00:07:33.031 { 00:07:33.031 "subsystem": "bdev", 00:07:33.031 "config": [ 00:07:33.031 { 00:07:33.031 "params": { 00:07:33.031 "block_size": 512, 00:07:33.031 "num_blocks": 1048576, 00:07:33.031 "name": "malloc0" 00:07:33.031 }, 00:07:33.031 "method": "bdev_malloc_create" 00:07:33.031 }, 00:07:33.031 { 00:07:33.031 "params": { 00:07:33.031 "filename": "/dev/zram1", 00:07:33.031 "name": "uring0" 00:07:33.031 }, 00:07:33.031 "method": "bdev_uring_create" 00:07:33.031 }, 00:07:33.031 { 00:07:33.031 "method": "bdev_wait_for_examine" 00:07:33.031 } 00:07:33.031 ] 00:07:33.031 } 00:07:33.031 ] 00:07:33.031 } 00:07:33.290 [2024-12-07 04:22:36.304793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.290 [2024-12-07 04:22:36.366202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.666  [2024-12-07T04:22:38.842Z] Copying: 132/512 [MB] (132 MBps) [2024-12-07T04:22:39.778Z] Copying: 266/512 [MB] (134 MBps) [2024-12-07T04:22:40.345Z] Copying: 390/512 [MB] (124 MBps) [2024-12-07T04:22:40.603Z] Copying: 512/512 [MB] (average 135 MBps) 00:07:37.363 00:07:37.363 04:22:40 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:37.363 04:22:40 -- dd/uring.sh@66 -- # [[ 
m3azwzugqr03ggntksok13nd3mee1k42tdwnq6kocev00hk7x75md035gvzazx9xu1mzoydan3hvqwafq65g2mtc1pyjy6q79jmhf85wq0f0j5j9fj3j6wgeltkea4uzgwld3m7nmykc8xpint650hxezwzxqdrzkaxtxqtwl6iyrbigofaduhablvggu37zkbx0dujtkt2wkr9fvhdhvau698k5ltf29nevbpvl5kn5c15iy2cqjcvwu3e5yljzi6spzq03wa3dn2tzh5k203ffhjcjt3o1mb4xjpbgk3skj4l9g7z5csb5jvnaspzywpl22ntj79es4ub4u5eqnrmsem9hdjbpamus5y15pv0ehuaig22tyuceihmdm6txzrcokxc6bdzrzh8hma056k7swg13sbjwpwxz0iv9unojovq8hgivb3c6bfj7877jfxg6ekf8vpi5cp2gz4wi0oxepx3nhozzf09l9fsjlyav7kekmadsyerq1geiy5tqvvi1myymx88zr1ucn2b43bkuo33yg1xnw2chr3uspmutzjkc4eq32otjo5jrw7shu01f4ab87wigvz7xx1lyskq93x7uwxcsc1qw9a0zt5bo2tx2s0cq743b626s2srgke3rio7gptrfs6755mqqaeqd97tijdt57c7ji7c3r0h0u4xux3lgz6qn884gw69d1toob9qbc3votcffzy4k14w9d265rgwbdyd1eqqjv6hmat3173yf4iebtk95sayyf7bwdituztb0v80bk9fq0xkztf46um582cqg04bzajvuxfog5mppinowndvt5mneehxi94ehw4pi3zry56cw75pie9c5k9jdkq6dar365oko1dsj01bif1xzydxtcsqvdlpaqzq2ph2e6meuhtglymvf416m6ouyvxdwxxier4v7vxibpmpl8goozx3lmr9520oyy80lr6322j6m4o49f82xjsjuwg9mxj20vpfa1y56krm3meqi7e80urthi8no == \m\3\a\z\w\z\u\g\q\r\0\3\g\g\n\t\k\s\o\k\1\3\n\d\3\m\e\e\1\k\4\2\t\d\w\n\q\6\k\o\c\e\v\0\0\h\k\7\x\7\5\m\d\0\3\5\g\v\z\a\z\x\9\x\u\1\m\z\o\y\d\a\n\3\h\v\q\w\a\f\q\6\5\g\2\m\t\c\1\p\y\j\y\6\q\7\9\j\m\h\f\8\5\w\q\0\f\0\j\5\j\9\f\j\3\j\6\w\g\e\l\t\k\e\a\4\u\z\g\w\l\d\3\m\7\n\m\y\k\c\8\x\p\i\n\t\6\5\0\h\x\e\z\w\z\x\q\d\r\z\k\a\x\t\x\q\t\w\l\6\i\y\r\b\i\g\o\f\a\d\u\h\a\b\l\v\g\g\u\3\7\z\k\b\x\0\d\u\j\t\k\t\2\w\k\r\9\f\v\h\d\h\v\a\u\6\9\8\k\5\l\t\f\2\9\n\e\v\b\p\v\l\5\k\n\5\c\1\5\i\y\2\c\q\j\c\v\w\u\3\e\5\y\l\j\z\i\6\s\p\z\q\0\3\w\a\3\d\n\2\t\z\h\5\k\2\0\3\f\f\h\j\c\j\t\3\o\1\m\b\4\x\j\p\b\g\k\3\s\k\j\4\l\9\g\7\z\5\c\s\b\5\j\v\n\a\s\p\z\y\w\p\l\2\2\n\t\j\7\9\e\s\4\u\b\4\u\5\e\q\n\r\m\s\e\m\9\h\d\j\b\p\a\m\u\s\5\y\1\5\p\v\0\e\h\u\a\i\g\2\2\t\y\u\c\e\i\h\m\d\m\6\t\x\z\r\c\o\k\x\c\6\b\d\z\r\z\h\8\h\m\a\0\5\6\k\7\s\w\g\1\3\s\b\j\w\p\w\x\z\0\i\v\9\u\n\o\j\o\v\q\8\h\g\i\v\b\3\c\6\b\f\j\7\8\7\7\j\f\x\g\6\e\k\f\8\v\p\i\5\c\p\2\g\z\4\w\i\0\o\x\e\p\x\3\n\h\o\z\z\f\0\9\l\9\f\s\j\l\y\a\v\7\k\e\k\m\a\d\s\y\e\r\q\1\g\e\i\y\5\t\q\v\v\i\1\m\y\y\m\x\8\8\z\r\1\u\c\n\2\b\4\3\b\k\u\o\3\3\y\g\1\x\n\w\2\c\h\r\3\u\s\p\m\u\t\z\j\k\c\4\e\q\3\2\o\t\j\o\5\j\r\w\7\s\h\u\0\1\f\4\a\b\8\7\w\i\g\v\z\7\x\x\1\l\y\s\k\q\9\3\x\7\u\w\x\c\s\c\1\q\w\9\a\0\z\t\5\b\o\2\t\x\2\s\0\c\q\7\4\3\b\6\2\6\s\2\s\r\g\k\e\3\r\i\o\7\g\p\t\r\f\s\6\7\5\5\m\q\q\a\e\q\d\9\7\t\i\j\d\t\5\7\c\7\j\i\7\c\3\r\0\h\0\u\4\x\u\x\3\l\g\z\6\q\n\8\8\4\g\w\6\9\d\1\t\o\o\b\9\q\b\c\3\v\o\t\c\f\f\z\y\4\k\1\4\w\9\d\2\6\5\r\g\w\b\d\y\d\1\e\q\q\j\v\6\h\m\a\t\3\1\7\3\y\f\4\i\e\b\t\k\9\5\s\a\y\y\f\7\b\w\d\i\t\u\z\t\b\0\v\8\0\b\k\9\f\q\0\x\k\z\t\f\4\6\u\m\5\8\2\c\q\g\0\4\b\z\a\j\v\u\x\f\o\g\5\m\p\p\i\n\o\w\n\d\v\t\5\m\n\e\e\h\x\i\9\4\e\h\w\4\p\i\3\z\r\y\5\6\c\w\7\5\p\i\e\9\c\5\k\9\j\d\k\q\6\d\a\r\3\6\5\o\k\o\1\d\s\j\0\1\b\i\f\1\x\z\y\d\x\t\c\s\q\v\d\l\p\a\q\z\q\2\p\h\2\e\6\m\e\u\h\t\g\l\y\m\v\f\4\1\6\m\6\o\u\y\v\x\d\w\x\x\i\e\r\4\v\7\v\x\i\b\p\m\p\l\8\g\o\o\z\x\3\l\m\r\9\5\2\0\o\y\y\8\0\l\r\6\3\2\2\j\6\m\4\o\4\9\f\8\2\x\j\s\j\u\w\g\9\m\x\j\2\0\v\p\f\a\1\y\5\6\k\r\m\3\m\e\q\i\7\e\8\0\u\r\t\h\i\8\n\o ]] 00:07:37.363 04:22:40 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:37.363 04:22:40 -- dd/uring.sh@69 -- # [[ 
m3azwzugqr03ggntksok13nd3mee1k42tdwnq6kocev00hk7x75md035gvzazx9xu1mzoydan3hvqwafq65g2mtc1pyjy6q79jmhf85wq0f0j5j9fj3j6wgeltkea4uzgwld3m7nmykc8xpint650hxezwzxqdrzkaxtxqtwl6iyrbigofaduhablvggu37zkbx0dujtkt2wkr9fvhdhvau698k5ltf29nevbpvl5kn5c15iy2cqjcvwu3e5yljzi6spzq03wa3dn2tzh5k203ffhjcjt3o1mb4xjpbgk3skj4l9g7z5csb5jvnaspzywpl22ntj79es4ub4u5eqnrmsem9hdjbpamus5y15pv0ehuaig22tyuceihmdm6txzrcokxc6bdzrzh8hma056k7swg13sbjwpwxz0iv9unojovq8hgivb3c6bfj7877jfxg6ekf8vpi5cp2gz4wi0oxepx3nhozzf09l9fsjlyav7kekmadsyerq1geiy5tqvvi1myymx88zr1ucn2b43bkuo33yg1xnw2chr3uspmutzjkc4eq32otjo5jrw7shu01f4ab87wigvz7xx1lyskq93x7uwxcsc1qw9a0zt5bo2tx2s0cq743b626s2srgke3rio7gptrfs6755mqqaeqd97tijdt57c7ji7c3r0h0u4xux3lgz6qn884gw69d1toob9qbc3votcffzy4k14w9d265rgwbdyd1eqqjv6hmat3173yf4iebtk95sayyf7bwdituztb0v80bk9fq0xkztf46um582cqg04bzajvuxfog5mppinowndvt5mneehxi94ehw4pi3zry56cw75pie9c5k9jdkq6dar365oko1dsj01bif1xzydxtcsqvdlpaqzq2ph2e6meuhtglymvf416m6ouyvxdwxxier4v7vxibpmpl8goozx3lmr9520oyy80lr6322j6m4o49f82xjsjuwg9mxj20vpfa1y56krm3meqi7e80urthi8no == \m\3\a\z\w\z\u\g\q\r\0\3\g\g\n\t\k\s\o\k\1\3\n\d\3\m\e\e\1\k\4\2\t\d\w\n\q\6\k\o\c\e\v\0\0\h\k\7\x\7\5\m\d\0\3\5\g\v\z\a\z\x\9\x\u\1\m\z\o\y\d\a\n\3\h\v\q\w\a\f\q\6\5\g\2\m\t\c\1\p\y\j\y\6\q\7\9\j\m\h\f\8\5\w\q\0\f\0\j\5\j\9\f\j\3\j\6\w\g\e\l\t\k\e\a\4\u\z\g\w\l\d\3\m\7\n\m\y\k\c\8\x\p\i\n\t\6\5\0\h\x\e\z\w\z\x\q\d\r\z\k\a\x\t\x\q\t\w\l\6\i\y\r\b\i\g\o\f\a\d\u\h\a\b\l\v\g\g\u\3\7\z\k\b\x\0\d\u\j\t\k\t\2\w\k\r\9\f\v\h\d\h\v\a\u\6\9\8\k\5\l\t\f\2\9\n\e\v\b\p\v\l\5\k\n\5\c\1\5\i\y\2\c\q\j\c\v\w\u\3\e\5\y\l\j\z\i\6\s\p\z\q\0\3\w\a\3\d\n\2\t\z\h\5\k\2\0\3\f\f\h\j\c\j\t\3\o\1\m\b\4\x\j\p\b\g\k\3\s\k\j\4\l\9\g\7\z\5\c\s\b\5\j\v\n\a\s\p\z\y\w\p\l\2\2\n\t\j\7\9\e\s\4\u\b\4\u\5\e\q\n\r\m\s\e\m\9\h\d\j\b\p\a\m\u\s\5\y\1\5\p\v\0\e\h\u\a\i\g\2\2\t\y\u\c\e\i\h\m\d\m\6\t\x\z\r\c\o\k\x\c\6\b\d\z\r\z\h\8\h\m\a\0\5\6\k\7\s\w\g\1\3\s\b\j\w\p\w\x\z\0\i\v\9\u\n\o\j\o\v\q\8\h\g\i\v\b\3\c\6\b\f\j\7\8\7\7\j\f\x\g\6\e\k\f\8\v\p\i\5\c\p\2\g\z\4\w\i\0\o\x\e\p\x\3\n\h\o\z\z\f\0\9\l\9\f\s\j\l\y\a\v\7\k\e\k\m\a\d\s\y\e\r\q\1\g\e\i\y\5\t\q\v\v\i\1\m\y\y\m\x\8\8\z\r\1\u\c\n\2\b\4\3\b\k\u\o\3\3\y\g\1\x\n\w\2\c\h\r\3\u\s\p\m\u\t\z\j\k\c\4\e\q\3\2\o\t\j\o\5\j\r\w\7\s\h\u\0\1\f\4\a\b\8\7\w\i\g\v\z\7\x\x\1\l\y\s\k\q\9\3\x\7\u\w\x\c\s\c\1\q\w\9\a\0\z\t\5\b\o\2\t\x\2\s\0\c\q\7\4\3\b\6\2\6\s\2\s\r\g\k\e\3\r\i\o\7\g\p\t\r\f\s\6\7\5\5\m\q\q\a\e\q\d\9\7\t\i\j\d\t\5\7\c\7\j\i\7\c\3\r\0\h\0\u\4\x\u\x\3\l\g\z\6\q\n\8\8\4\g\w\6\9\d\1\t\o\o\b\9\q\b\c\3\v\o\t\c\f\f\z\y\4\k\1\4\w\9\d\2\6\5\r\g\w\b\d\y\d\1\e\q\q\j\v\6\h\m\a\t\3\1\7\3\y\f\4\i\e\b\t\k\9\5\s\a\y\y\f\7\b\w\d\i\t\u\z\t\b\0\v\8\0\b\k\9\f\q\0\x\k\z\t\f\4\6\u\m\5\8\2\c\q\g\0\4\b\z\a\j\v\u\x\f\o\g\5\m\p\p\i\n\o\w\n\d\v\t\5\m\n\e\e\h\x\i\9\4\e\h\w\4\p\i\3\z\r\y\5\6\c\w\7\5\p\i\e\9\c\5\k\9\j\d\k\q\6\d\a\r\3\6\5\o\k\o\1\d\s\j\0\1\b\i\f\1\x\z\y\d\x\t\c\s\q\v\d\l\p\a\q\z\q\2\p\h\2\e\6\m\e\u\h\t\g\l\y\m\v\f\4\1\6\m\6\o\u\y\v\x\d\w\x\x\i\e\r\4\v\7\v\x\i\b\p\m\p\l\8\g\o\o\z\x\3\l\m\r\9\5\2\0\o\y\y\8\0\l\r\6\3\2\2\j\6\m\4\o\4\9\f\8\2\x\j\s\j\u\w\g\9\m\x\j\2\0\v\p\f\a\1\y\5\6\k\r\m\3\m\e\q\i\7\e\8\0\u\r\t\h\i\8\n\o ]] 00:07:37.363 04:22:40 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:37.931 04:22:40 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:37.931 04:22:40 -- dd/uring.sh@75 -- # gen_conf 00:07:37.931 04:22:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.931 04:22:40 -- common/autotest_common.sh@10 -- # set +x 
00:07:37.931 [2024-12-07 04:22:40.957617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.931 [2024-12-07 04:22:40.957897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59122 ] 00:07:37.931 { 00:07:37.931 "subsystems": [ 00:07:37.931 { 00:07:37.931 "subsystem": "bdev", 00:07:37.931 "config": [ 00:07:37.931 { 00:07:37.931 "params": { 00:07:37.931 "block_size": 512, 00:07:37.931 "num_blocks": 1048576, 00:07:37.931 "name": "malloc0" 00:07:37.931 }, 00:07:37.931 "method": "bdev_malloc_create" 00:07:37.931 }, 00:07:37.931 { 00:07:37.931 "params": { 00:07:37.931 "filename": "/dev/zram1", 00:07:37.931 "name": "uring0" 00:07:37.931 }, 00:07:37.931 "method": "bdev_uring_create" 00:07:37.931 }, 00:07:37.931 { 00:07:37.931 "method": "bdev_wait_for_examine" 00:07:37.931 } 00:07:37.931 ] 00:07:37.931 } 00:07:37.931 ] 00:07:37.931 } 00:07:37.931 [2024-12-07 04:22:41.098165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.190 [2024-12-07 04:22:41.170432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.201  [2024-12-07T04:22:43.375Z] Copying: 150/512 [MB] (150 MBps) [2024-12-07T04:22:44.752Z] Copying: 320/512 [MB] (170 MBps) [2024-12-07T04:22:44.752Z] Copying: 465/512 [MB] (145 MBps) [2024-12-07T04:22:45.010Z] Copying: 512/512 [MB] (average 152 MBps) 00:07:41.770 00:07:41.770 04:22:44 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:41.770 04:22:44 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:41.770 04:22:44 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:41.770 04:22:44 -- dd/uring.sh@87 -- # : 00:07:41.770 04:22:44 -- dd/uring.sh@87 -- # : 00:07:41.770 04:22:44 -- dd/uring.sh@87 -- # gen_conf 00:07:41.770 04:22:44 -- dd/common.sh@31 -- # xtrace_disable 00:07:41.770 04:22:44 -- common/autotest_common.sh@10 -- # set +x 00:07:42.032 [2024-12-07 04:22:45.014556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.032 [2024-12-07 04:22:45.014694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59180 ] 00:07:42.032 { 00:07:42.032 "subsystems": [ 00:07:42.032 { 00:07:42.032 "subsystem": "bdev", 00:07:42.032 "config": [ 00:07:42.032 { 00:07:42.032 "params": { 00:07:42.032 "block_size": 512, 00:07:42.032 "num_blocks": 1048576, 00:07:42.032 "name": "malloc0" 00:07:42.032 }, 00:07:42.032 "method": "bdev_malloc_create" 00:07:42.032 }, 00:07:42.032 { 00:07:42.032 "params": { 00:07:42.032 "filename": "/dev/zram1", 00:07:42.032 "name": "uring0" 00:07:42.032 }, 00:07:42.032 "method": "bdev_uring_create" 00:07:42.032 }, 00:07:42.032 { 00:07:42.032 "params": { 00:07:42.032 "name": "uring0" 00:07:42.032 }, 00:07:42.032 "method": "bdev_uring_delete" 00:07:42.032 }, 00:07:42.032 { 00:07:42.032 "method": "bdev_wait_for_examine" 00:07:42.032 } 00:07:42.032 ] 00:07:42.032 } 00:07:42.032 ] 00:07:42.032 } 00:07:42.032 [2024-12-07 04:22:45.150114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.032 [2024-12-07 04:22:45.198588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.292  [2024-12-07T04:22:45.790Z] Copying: 0/0 [B] (average 0 Bps) 00:07:42.550 00:07:42.550 04:22:45 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.550 04:22:45 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.550 04:22:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.550 04:22:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.550 04:22:45 -- dd/uring.sh@94 -- # : 00:07:42.550 04:22:45 -- dd/uring.sh@94 -- # gen_conf 00:07:42.550 04:22:45 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.550 04:22:45 -- common/autotest_common.sh@10 -- # set +x 00:07:42.550 04:22:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.550 04:22:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.550 04:22:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.550 04:22:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.550 04:22:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.550 04:22:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.550 04:22:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.550 04:22:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.550 [2024-12-07 04:22:45.666298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:42.550 [2024-12-07 04:22:45.666573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59203 ] 00:07:42.550 { 00:07:42.550 "subsystems": [ 00:07:42.550 { 00:07:42.550 "subsystem": "bdev", 00:07:42.550 "config": [ 00:07:42.550 { 00:07:42.550 "params": { 00:07:42.550 "block_size": 512, 00:07:42.550 "num_blocks": 1048576, 00:07:42.550 "name": "malloc0" 00:07:42.550 }, 00:07:42.550 "method": "bdev_malloc_create" 00:07:42.550 }, 00:07:42.550 { 00:07:42.550 "params": { 00:07:42.550 "filename": "/dev/zram1", 00:07:42.550 "name": "uring0" 00:07:42.550 }, 00:07:42.550 "method": "bdev_uring_create" 00:07:42.550 }, 00:07:42.550 { 00:07:42.550 "params": { 00:07:42.550 "name": "uring0" 00:07:42.550 }, 00:07:42.550 "method": "bdev_uring_delete" 00:07:42.550 }, 00:07:42.550 { 00:07:42.550 "method": "bdev_wait_for_examine" 00:07:42.550 } 00:07:42.550 ] 00:07:42.550 } 00:07:42.550 ] 00:07:42.550 } 00:07:42.809 [2024-12-07 04:22:45.803937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.809 [2024-12-07 04:22:45.852719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.809 [2024-12-07 04:22:46.004777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:42.809 [2024-12-07 04:22:46.004824] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:42.809 [2024-12-07 04:22:46.004851] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:07:42.809 [2024-12-07 04:22:46.004859] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.068 [2024-12-07 04:22:46.167584] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:43.068 04:22:46 -- common/autotest_common.sh@653 -- # es=237 00:07:43.068 04:22:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.068 04:22:46 -- common/autotest_common.sh@662 -- # es=109 00:07:43.068 04:22:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.068 04:22:46 -- common/autotest_common.sh@670 -- # es=1 00:07:43.068 04:22:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.068 04:22:46 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:43.068 04:22:46 -- dd/common.sh@172 -- # local id=1 00:07:43.068 04:22:46 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:43.068 04:22:46 -- dd/common.sh@176 -- # echo 1 00:07:43.068 04:22:46 -- dd/common.sh@177 -- # echo 1 00:07:43.068 04:22:46 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:43.634 00:07:43.634 ************************************ 00:07:43.634 END TEST dd_uring_copy 00:07:43.634 ************************************ 00:07:43.634 real 0m14.296s 00:07:43.634 user 0m8.169s 00:07:43.634 sys 0m5.472s 00:07:43.634 04:22:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.634 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:43.634 ************************************ 00:07:43.634 END TEST spdk_dd_uring 00:07:43.634 ************************************ 00:07:43.634 00:07:43.634 real 0m14.527s 00:07:43.634 user 0m8.303s 00:07:43.634 sys 0m5.574s 00:07:43.634 04:22:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.634 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:43.634 04:22:46 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:43.634 04:22:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.634 04:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.634 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:43.634 ************************************ 00:07:43.634 START TEST spdk_dd_sparse 00:07:43.634 ************************************ 00:07:43.634 04:22:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:43.634 * Looking for test storage... 00:07:43.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.634 04:22:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:43.634 04:22:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:43.634 04:22:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:43.634 04:22:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:43.634 04:22:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:43.634 04:22:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:43.634 04:22:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:43.634 04:22:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:43.634 04:22:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:43.634 04:22:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.634 04:22:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:43.634 04:22:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:43.634 04:22:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:43.634 04:22:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:43.634 04:22:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:43.634 04:22:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:43.634 04:22:46 -- scripts/common.sh@344 -- # : 1 00:07:43.634 04:22:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:43.634 04:22:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.634 04:22:46 -- scripts/common.sh@364 -- # decimal 1 00:07:43.634 04:22:46 -- scripts/common.sh@352 -- # local d=1 00:07:43.634 04:22:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.634 04:22:46 -- scripts/common.sh@354 -- # echo 1 00:07:43.634 04:22:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:43.634 04:22:46 -- scripts/common.sh@365 -- # decimal 2 00:07:43.634 04:22:46 -- scripts/common.sh@352 -- # local d=2 00:07:43.634 04:22:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.634 04:22:46 -- scripts/common.sh@354 -- # echo 2 00:07:43.634 04:22:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:43.634 04:22:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:43.634 04:22:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:43.634 04:22:46 -- scripts/common.sh@367 -- # return 0 00:07:43.634 04:22:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.634 04:22:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.634 --rc genhtml_branch_coverage=1 00:07:43.634 --rc genhtml_function_coverage=1 00:07:43.634 --rc genhtml_legend=1 00:07:43.634 --rc geninfo_all_blocks=1 00:07:43.634 --rc geninfo_unexecuted_blocks=1 00:07:43.634 00:07:43.634 ' 00:07:43.634 04:22:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.634 --rc genhtml_branch_coverage=1 00:07:43.634 --rc genhtml_function_coverage=1 00:07:43.634 --rc genhtml_legend=1 00:07:43.634 --rc geninfo_all_blocks=1 00:07:43.634 --rc geninfo_unexecuted_blocks=1 00:07:43.634 00:07:43.634 ' 00:07:43.634 04:22:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.634 --rc genhtml_branch_coverage=1 00:07:43.634 --rc genhtml_function_coverage=1 00:07:43.634 --rc genhtml_legend=1 00:07:43.634 --rc geninfo_all_blocks=1 00:07:43.634 --rc geninfo_unexecuted_blocks=1 00:07:43.634 00:07:43.634 ' 00:07:43.634 04:22:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.634 --rc genhtml_branch_coverage=1 00:07:43.634 --rc genhtml_function_coverage=1 00:07:43.634 --rc genhtml_legend=1 00:07:43.634 --rc geninfo_all_blocks=1 00:07:43.634 --rc geninfo_unexecuted_blocks=1 00:07:43.634 00:07:43.634 ' 00:07:43.634 04:22:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.634 04:22:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.634 04:22:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.634 04:22:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.634 04:22:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.634 04:22:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.635 04:22:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.635 04:22:46 -- paths/export.sh@5 -- # export PATH 00:07:43.635 04:22:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.635 04:22:46 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:43.635 04:22:46 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:43.635 04:22:46 -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:43.635 04:22:46 -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:43.635 04:22:46 -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:43.635 04:22:46 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:43.635 04:22:46 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:43.635 04:22:46 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:43.635 04:22:46 -- dd/sparse.sh@118 -- # prepare 00:07:43.635 04:22:46 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:43.635 04:22:46 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:43.635 1+0 records in 00:07:43.635 1+0 records out 00:07:43.635 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00532595 s, 788 MB/s 00:07:43.635 04:22:46 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:43.893 1+0 records in 00:07:43.893 1+0 records out 00:07:43.893 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00677104 s, 619 MB/s 00:07:43.893 04:22:46 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:43.893 1+0 records in 00:07:43.893 1+0 records out 00:07:43.893 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00528235 s, 794 MB/s 00:07:43.893 04:22:46 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:43.893 04:22:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.893 04:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.893 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:43.893 ************************************ 00:07:43.893 START TEST dd_sparse_file_to_file 00:07:43.893 
************************************ 00:07:43.893 04:22:46 -- common/autotest_common.sh@1114 -- # file_to_file 00:07:43.893 04:22:46 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:43.893 04:22:46 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:43.893 04:22:46 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:43.893 04:22:46 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:43.893 04:22:46 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:43.893 04:22:46 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:43.893 04:22:46 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:43.893 04:22:46 -- dd/sparse.sh@41 -- # gen_conf 00:07:43.893 04:22:46 -- dd/common.sh@31 -- # xtrace_disable 00:07:43.893 04:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:43.893 [2024-12-07 04:22:46.946903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.893 [2024-12-07 04:22:46.947129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:07:43.893 { 00:07:43.893 "subsystems": [ 00:07:43.893 { 00:07:43.893 "subsystem": "bdev", 00:07:43.893 "config": [ 00:07:43.893 { 00:07:43.893 "params": { 00:07:43.893 "block_size": 4096, 00:07:43.893 "filename": "dd_sparse_aio_disk", 00:07:43.893 "name": "dd_aio" 00:07:43.893 }, 00:07:43.893 "method": "bdev_aio_create" 00:07:43.893 }, 00:07:43.893 { 00:07:43.893 "params": { 00:07:43.893 "lvs_name": "dd_lvstore", 00:07:43.893 "bdev_name": "dd_aio" 00:07:43.893 }, 00:07:43.893 "method": "bdev_lvol_create_lvstore" 00:07:43.893 }, 00:07:43.893 { 00:07:43.893 "method": "bdev_wait_for_examine" 00:07:43.893 } 00:07:43.893 ] 00:07:43.893 } 00:07:43.893 ] 00:07:43.893 } 00:07:43.893 [2024-12-07 04:22:47.080339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.893 [2024-12-07 04:22:47.129939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.151  [2024-12-07T04:22:47.649Z] Copying: 12/36 [MB] (average 1714 MBps) 00:07:44.409 00:07:44.409 04:22:47 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:44.409 04:22:47 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:44.409 04:22:47 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:44.409 04:22:47 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:44.409 04:22:47 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:44.409 04:22:47 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:44.409 04:22:47 -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:44.409 04:22:47 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:44.409 ************************************ 00:07:44.409 END TEST dd_sparse_file_to_file 00:07:44.409 ************************************ 00:07:44.409 04:22:47 -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:44.409 04:22:47 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:44.409 00:07:44.409 real 0m0.600s 00:07:44.409 user 0m0.354s 00:07:44.409 sys 0m0.129s 00:07:44.409 04:22:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.409 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.409 04:22:47 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:07:44.409 04:22:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:44.409 04:22:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.409 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.409 ************************************ 00:07:44.409 START TEST dd_sparse_file_to_bdev 00:07:44.409 ************************************ 00:07:44.409 04:22:47 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:07:44.409 04:22:47 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:44.409 04:22:47 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:44.409 04:22:47 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:07:44.409 04:22:47 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:44.409 04:22:47 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:44.409 04:22:47 -- dd/sparse.sh@73 -- # gen_conf 00:07:44.409 04:22:47 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.409 04:22:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.409 [2024-12-07 04:22:47.603139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.409 [2024-12-07 04:22:47.603229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:07:44.409 { 00:07:44.409 "subsystems": [ 00:07:44.409 { 00:07:44.409 "subsystem": "bdev", 00:07:44.409 "config": [ 00:07:44.409 { 00:07:44.409 "params": { 00:07:44.409 "block_size": 4096, 00:07:44.409 "filename": "dd_sparse_aio_disk", 00:07:44.409 "name": "dd_aio" 00:07:44.409 }, 00:07:44.409 "method": "bdev_aio_create" 00:07:44.409 }, 00:07:44.409 { 00:07:44.409 "params": { 00:07:44.409 "lvs_name": "dd_lvstore", 00:07:44.409 "lvol_name": "dd_lvol", 00:07:44.409 "size": 37748736, 00:07:44.409 "thin_provision": true 00:07:44.409 }, 00:07:44.409 "method": "bdev_lvol_create" 00:07:44.409 }, 00:07:44.409 { 00:07:44.409 "method": "bdev_wait_for_examine" 00:07:44.409 } 00:07:44.409 ] 00:07:44.409 } 00:07:44.409 ] 00:07:44.409 } 00:07:44.667 [2024-12-07 04:22:47.742332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.667 [2024-12-07 04:22:47.802667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.667 [2024-12-07 04:22:47.866632] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:07:44.926  [2024-12-07T04:22:48.166Z] Copying: 12/36 [MB] (average 363 MBps)[2024-12-07 04:22:47.916730] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:07:44.926 00:07:44.926 00:07:44.926 00:07:44.926 real 0m0.603s 00:07:44.926 user 0m0.387s 00:07:44.926 sys 0m0.137s 00:07:44.926 04:22:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.926 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:44.926 ************************************ 00:07:44.926 END TEST dd_sparse_file_to_bdev 00:07:44.926 ************************************ 
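Condensed, the sparse round trip that sparse.sh exercises in the tests above and below looks roughly like the sketch here (commands, sizes and stat flags are lifted from this log; the spdk_dd path is shortened and the bdev_aio/dd_lvstore JSON that the test normally feeds in on /dev/fd/62 is omitted for brevity):

  # 100 MiB backing file for the dd_aio bdev
  truncate dd_sparse_aio_disk --size 104857600
  # sparse input: three 4 MiB extents at offsets 0, 16 MiB and 32 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
  # copy through spdk_dd with --sparse so the holes survive the trip through the bdev layer
  spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
  # identical apparent size (%s) and identical allocated blocks (%b) show sparseness was preserved
  [[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]
  [[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]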
00:07:45.184 04:22:48 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:45.184 04:22:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:45.184 04:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.184 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:45.184 ************************************ 00:07:45.184 START TEST dd_sparse_bdev_to_file 00:07:45.184 ************************************ 00:07:45.184 04:22:48 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:07:45.184 04:22:48 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:45.184 04:22:48 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:45.184 04:22:48 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:45.184 04:22:48 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:45.184 04:22:48 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:45.184 04:22:48 -- dd/sparse.sh@91 -- # gen_conf 00:07:45.184 04:22:48 -- dd/common.sh@31 -- # xtrace_disable 00:07:45.184 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:45.184 [2024-12-07 04:22:48.257167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.184 [2024-12-07 04:22:48.257257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59379 ] 00:07:45.184 { 00:07:45.184 "subsystems": [ 00:07:45.184 { 00:07:45.184 "subsystem": "bdev", 00:07:45.184 "config": [ 00:07:45.184 { 00:07:45.184 "params": { 00:07:45.184 "block_size": 4096, 00:07:45.185 "filename": "dd_sparse_aio_disk", 00:07:45.185 "name": "dd_aio" 00:07:45.185 }, 00:07:45.185 "method": "bdev_aio_create" 00:07:45.185 }, 00:07:45.185 { 00:07:45.185 "method": "bdev_wait_for_examine" 00:07:45.185 } 00:07:45.185 ] 00:07:45.185 } 00:07:45.185 ] 00:07:45.185 } 00:07:45.185 [2024-12-07 04:22:48.397759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.443 [2024-12-07 04:22:48.460631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.443  [2024-12-07T04:22:48.942Z] Copying: 12/36 [MB] (average 1333 MBps) 00:07:45.702 00:07:45.702 04:22:48 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:45.702 04:22:48 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:45.702 04:22:48 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:45.702 04:22:48 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:45.702 04:22:48 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:45.702 04:22:48 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:45.702 04:22:48 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:45.702 04:22:48 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:45.702 04:22:48 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:45.702 04:22:48 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:45.702 00:07:45.702 real 0m0.590s 00:07:45.702 user 0m0.366s 00:07:45.702 sys 0m0.142s 00:07:45.702 04:22:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.702 ************************************ 00:07:45.702 END TEST dd_sparse_bdev_to_file 00:07:45.702 ************************************ 00:07:45.702 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:45.702 04:22:48 -- 
dd/sparse.sh@1 -- # cleanup 00:07:45.702 04:22:48 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:45.702 04:22:48 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:45.702 04:22:48 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:45.702 04:22:48 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:45.702 ************************************ 00:07:45.702 END TEST spdk_dd_sparse 00:07:45.702 ************************************ 00:07:45.702 00:07:45.702 real 0m2.176s 00:07:45.702 user 0m1.280s 00:07:45.702 sys 0m0.612s 00:07:45.702 04:22:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.702 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:45.702 04:22:48 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:45.702 04:22:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:45.702 04:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.702 04:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:45.702 ************************************ 00:07:45.702 START TEST spdk_dd_negative 00:07:45.702 ************************************ 00:07:45.702 04:22:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:45.963 * Looking for test storage... 00:07:45.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:45.963 04:22:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:45.963 04:22:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:45.963 04:22:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:45.963 04:22:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:45.963 04:22:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:45.963 04:22:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:45.963 04:22:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:45.963 04:22:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:45.963 04:22:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:45.963 04:22:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.963 04:22:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:45.963 04:22:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:45.963 04:22:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:45.963 04:22:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:45.963 04:22:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:45.963 04:22:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:45.963 04:22:49 -- scripts/common.sh@344 -- # : 1 00:07:45.963 04:22:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:45.963 04:22:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.963 04:22:49 -- scripts/common.sh@364 -- # decimal 1 00:07:45.963 04:22:49 -- scripts/common.sh@352 -- # local d=1 00:07:45.963 04:22:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.963 04:22:49 -- scripts/common.sh@354 -- # echo 1 00:07:45.963 04:22:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:45.963 04:22:49 -- scripts/common.sh@365 -- # decimal 2 00:07:45.963 04:22:49 -- scripts/common.sh@352 -- # local d=2 00:07:45.963 04:22:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.963 04:22:49 -- scripts/common.sh@354 -- # echo 2 00:07:45.963 04:22:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:45.963 04:22:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:45.963 04:22:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:45.963 04:22:49 -- scripts/common.sh@367 -- # return 0 00:07:45.963 04:22:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.963 04:22:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.963 --rc genhtml_branch_coverage=1 00:07:45.963 --rc genhtml_function_coverage=1 00:07:45.963 --rc genhtml_legend=1 00:07:45.963 --rc geninfo_all_blocks=1 00:07:45.963 --rc geninfo_unexecuted_blocks=1 00:07:45.963 00:07:45.963 ' 00:07:45.963 04:22:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.963 --rc genhtml_branch_coverage=1 00:07:45.963 --rc genhtml_function_coverage=1 00:07:45.963 --rc genhtml_legend=1 00:07:45.963 --rc geninfo_all_blocks=1 00:07:45.963 --rc geninfo_unexecuted_blocks=1 00:07:45.963 00:07:45.963 ' 00:07:45.963 04:22:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.963 --rc genhtml_branch_coverage=1 00:07:45.963 --rc genhtml_function_coverage=1 00:07:45.963 --rc genhtml_legend=1 00:07:45.963 --rc geninfo_all_blocks=1 00:07:45.963 --rc geninfo_unexecuted_blocks=1 00:07:45.963 00:07:45.963 ' 00:07:45.963 04:22:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:45.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.963 --rc genhtml_branch_coverage=1 00:07:45.963 --rc genhtml_function_coverage=1 00:07:45.963 --rc genhtml_legend=1 00:07:45.963 --rc geninfo_all_blocks=1 00:07:45.963 --rc geninfo_unexecuted_blocks=1 00:07:45.963 00:07:45.963 ' 00:07:45.963 04:22:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.963 04:22:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.963 04:22:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.963 04:22:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.963 04:22:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.963 04:22:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.963 04:22:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.963 04:22:49 -- paths/export.sh@5 -- # export PATH 00:07:45.964 04:22:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.964 04:22:49 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.964 04:22:49 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.964 04:22:49 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.964 04:22:49 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.964 04:22:49 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:45.964 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:45.964 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.964 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:45.964 ************************************ 00:07:45.964 START TEST dd_invalid_arguments 00:07:45.964 ************************************ 00:07:45.964 04:22:49 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:07:45.964 04:22:49 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:45.964 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:45.964 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:45.964 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.964 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.964 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.964 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.964 04:22:49 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.964 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.964 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.964 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.964 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:45.964 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:45.964 options: 00:07:45.964 -c, --config JSON config file (default none) 00:07:45.964 --json JSON config file (default none) 00:07:45.964 --json-ignore-init-errors 00:07:45.964 don't exit on invalid config entry 00:07:45.964 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:45.964 -g, --single-file-segments 00:07:45.964 force creating just one hugetlbfs file 00:07:45.964 -h, --help show this usage 00:07:45.964 -i, --shm-id shared memory ID (optional) 00:07:45.964 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:45.964 --lcores lcore to CPU mapping list. The list is in the format: 00:07:45.964 [<,lcores[@CPUs]>...] 00:07:45.964 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:45.964 Within the group, '-' is used for range separator, 00:07:45.964 ',' is used for single number separator. 00:07:45.964 '( )' can be omitted for single element group, 00:07:45.964 '@' can be omitted if cpus and lcores have the same value 00:07:45.964 -n, --mem-channels channel number of memory channels used for DPDK 00:07:45.964 -p, --main-core main (primary) core for DPDK 00:07:45.964 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:45.964 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:45.964 --disable-cpumask-locks Disable CPU core lock files. 00:07:45.964 --silence-noticelog disable notice level logging to stderr 00:07:45.964 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:45.964 -u, --no-pci disable PCI access 00:07:45.964 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:45.964 --max-delay maximum reactor delay (in microseconds) 00:07:45.964 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:45.964 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:45.964 -R, --huge-unlink unlink huge files after initialization 00:07:45.964 -v, --version print SPDK version 00:07:45.964 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:45.964 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:45.964 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:45.964 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:45.964 Tracepoints vary in size and can use more than one trace entry. 
00:07:45.964 --rpcs-allowed comma-separated list of permitted RPCS 00:07:45.964 --env-context Opaque context for use of the env implementation 00:07:45.964 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:45.964 --no-huge run without using hugepages 00:07:45.964 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:45.964 -e, --tpoint-group [:] 00:07:45.964 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:07:45.964 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:45.964 [2024-12-07 04:22:49.167488] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:07:45.964 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:45.964 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:45.964 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:45.964 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:45.964 [--------- DD Options ---------] 00:07:45.964 --if Input file. Must specify either --if or --ib. 00:07:45.964 --ib Input bdev. Must specifier either --if or --ib 00:07:45.964 --of Output file. Must specify either --of or --ob. 00:07:45.964 --ob Output bdev. Must specify either --of or --ob. 00:07:45.964 --iflag Input file flags. 00:07:45.964 --oflag Output file flags. 00:07:45.964 --bs I/O unit size (default: 4096) 00:07:45.964 --qd Queue depth (default: 2) 00:07:45.964 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:45.964 --skip Skip this many I/O units at start of input. (default: 0) 00:07:45.964 --seek Skip this many I/O units at start of output. (default: 0) 00:07:45.964 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:45.964 --sparse Enable hole skipping in input target 00:07:45.964 Available iflag and oflag values: 00:07:45.964 append - append mode 00:07:45.964 direct - use direct I/O for data 00:07:45.964 directory - fail unless a directory 00:07:45.964 dsync - use synchronized I/O for data 00:07:45.964 noatime - do not update access time 00:07:45.964 noctty - do not assign controlling terminal from file 00:07:45.964 nofollow - do not follow symlinks 00:07:45.964 nonblock - use non-blocking I/O 00:07:45.964 sync - use synchronized I/O for data and metadata 00:07:45.964 ************************************ 00:07:45.964 END TEST dd_invalid_arguments 00:07:45.964 ************************************ 00:07:45.964 04:22:49 -- common/autotest_common.sh@653 -- # es=2 00:07:45.964 04:22:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.964 04:22:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.964 04:22:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.964 00:07:45.964 real 0m0.078s 00:07:45.964 user 0m0.045s 00:07:45.964 sys 0m0.030s 00:07:45.964 04:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.964 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.224 04:22:49 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:46.224 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.224 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.224 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.224 ************************************ 00:07:46.224 START TEST dd_double_input 00:07:46.224 ************************************ 00:07:46.224 04:22:49 -- common/autotest_common.sh@1114 -- # double_input 00:07:46.224 04:22:49 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.224 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:46.224 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.224 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.224 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.224 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.224 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.224 04:22:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.224 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.224 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.224 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.224 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.224 [2024-12-07 04:22:49.293348] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:07:46.224 ************************************ 00:07:46.224 END TEST dd_double_input 00:07:46.224 ************************************ 00:07:46.225 04:22:49 -- common/autotest_common.sh@653 -- # es=22 00:07:46.225 04:22:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.225 04:22:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.225 04:22:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.225 00:07:46.225 real 0m0.078s 00:07:46.225 user 0m0.047s 00:07:46.225 sys 0m0.029s 00:07:46.225 04:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.225 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.225 04:22:49 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:46.225 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.225 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.225 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.225 ************************************ 00:07:46.225 START TEST dd_double_output 00:07:46.225 ************************************ 00:07:46.225 04:22:49 -- common/autotest_common.sh@1114 -- # double_output 00:07:46.225 04:22:49 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.225 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:46.225 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.225 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.225 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.225 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.225 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.225 04:22:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.225 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.225 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.225 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.225 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.225 [2024-12-07 04:22:49.422236] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
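Each negative case in this suite follows the same pattern: spdk_dd is invoked with a deliberately invalid option combination, the wrapper expects a non-zero exit status, and the error text is what appears in the log above. A minimal standalone sketch of the --of/--ob conflict just exercised is shown below; the inline if stands in for the NOT helper from autotest_common.sh, and the dump file names are placeholders.

# Sketch only: spdk_dd must refuse --of together with --ob and exit non-zero.
SPDK=/home/vagrant/spdk_repo/spdk
touch dump0 dump1
if "$SPDK"/build/bin/spdk_dd --if=dump0 --of=dump1 --ob= 2> err.log; then
    echo "FAIL: conflicting --of/--ob was accepted" >&2
    exit 1
fi
grep -q 'either --of or --ob' err.log && echo 'PASS: conflict rejected as expected'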
00:07:46.225 04:22:49 -- common/autotest_common.sh@653 -- # es=22 00:07:46.225 04:22:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.225 04:22:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.225 04:22:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.225 ************************************ 00:07:46.225 END TEST dd_double_output 00:07:46.225 ************************************ 00:07:46.225 00:07:46.225 real 0m0.080s 00:07:46.225 user 0m0.051s 00:07:46.225 sys 0m0.028s 00:07:46.225 04:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.225 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 04:22:49 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:46.485 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.485 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.485 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 ************************************ 00:07:46.485 START TEST dd_no_input 00:07:46.485 ************************************ 00:07:46.485 04:22:49 -- common/autotest_common.sh@1114 -- # no_input 00:07:46.485 04:22:49 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:46.485 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:46.485 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:46.485 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.485 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.485 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.485 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.485 04:22:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.485 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.485 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.485 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.485 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:46.485 [2024-12-07 04:22:49.553627] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:07:46.485 04:22:49 -- common/autotest_common.sh@653 -- # es=22 00:07:46.485 04:22:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.485 04:22:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.485 04:22:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.485 00:07:46.485 real 0m0.071s 00:07:46.485 user 0m0.048s 00:07:46.485 sys 0m0.022s 00:07:46.485 ************************************ 00:07:46.485 END TEST dd_no_input 00:07:46.485 ************************************ 00:07:46.485 04:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.485 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 04:22:49 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:46.485 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.486 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.486 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.486 ************************************ 
00:07:46.486 START TEST dd_no_output 00:07:46.486 ************************************ 00:07:46.486 04:22:49 -- common/autotest_common.sh@1114 -- # no_output 00:07:46.486 04:22:49 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.486 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:46.486 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.486 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.486 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.486 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.486 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.486 04:22:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.486 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.486 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.486 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.486 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.486 [2024-12-07 04:22:49.671676] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:07:46.486 04:22:49 -- common/autotest_common.sh@653 -- # es=22 00:07:46.486 04:22:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.486 04:22:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.486 04:22:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.486 00:07:46.486 real 0m0.070s 00:07:46.486 user 0m0.043s 00:07:46.486 sys 0m0.026s 00:07:46.486 04:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.486 ************************************ 00:07:46.486 END TEST dd_no_output 00:07:46.486 ************************************ 00:07:46.486 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 04:22:49 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:46.745 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.745 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.745 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 ************************************ 00:07:46.745 START TEST dd_wrong_blocksize 00:07:46.745 ************************************ 00:07:46.745 04:22:49 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:07:46.745 04:22:49 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:46.745 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:46.745 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:46.745 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.745 04:22:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.745 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.745 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:46.745 [2024-12-07 04:22:49.794057] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:07:46.745 04:22:49 -- common/autotest_common.sh@653 -- # es=22 00:07:46.745 04:22:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.745 04:22:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.745 04:22:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.745 00:07:46.745 real 0m0.072s 00:07:46.745 user 0m0.047s 00:07:46.745 sys 0m0.024s 00:07:46.745 ************************************ 00:07:46.745 END TEST dd_wrong_blocksize 00:07:46.745 ************************************ 00:07:46.745 04:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:46.745 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 04:22:49 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:46.745 04:22:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:46.745 04:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.745 04:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.745 ************************************ 00:07:46.745 START TEST dd_smaller_blocksize 00:07:46.745 ************************************ 00:07:46.745 04:22:49 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:07:46.745 04:22:49 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:46.745 04:22:49 -- common/autotest_common.sh@650 -- # local es=0 00:07:46.745 04:22:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:46.745 04:22:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.745 04:22:49 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.745 04:22:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.745 04:22:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:46.745 04:22:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:46.745 [2024-12-07 04:22:49.915173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.745 [2024-12-07 04:22:49.915447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59597 ] 00:07:47.004 [2024-12-07 04:22:50.049899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.004 [2024-12-07 04:22:50.117082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.264 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:47.264 [2024-12-07 04:22:50.449540] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:47.264 [2024-12-07 04:22:50.449659] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.523 [2024-12-07 04:22:50.524565] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:47.523 ************************************ 00:07:47.523 END TEST dd_smaller_blocksize 00:07:47.523 ************************************ 00:07:47.523 04:22:50 -- common/autotest_common.sh@653 -- # es=244 00:07:47.523 04:22:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.523 04:22:50 -- common/autotest_common.sh@662 -- # es=116 00:07:47.523 04:22:50 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:47.523 04:22:50 -- common/autotest_common.sh@670 -- # es=1 00:07:47.523 04:22:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.523 00:07:47.523 real 0m0.778s 00:07:47.523 user 0m0.351s 00:07:47.523 sys 0m0.320s 00:07:47.523 04:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.523 04:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.523 04:22:50 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:47.523 04:22:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.523 04:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.523 04:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.523 ************************************ 00:07:47.523 START TEST dd_invalid_count 00:07:47.523 ************************************ 00:07:47.523 04:22:50 -- common/autotest_common.sh@1114 -- # invalid_count 00:07:47.523 04:22:50 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:47.523 04:22:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:47.523 04:22:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:47.523 04:22:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.523 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.523 04:22:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.523 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.523 04:22:50 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.523 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.523 04:22:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.523 04:22:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.523 04:22:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:47.523 [2024-12-07 04:22:50.750818] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:07:47.783 04:22:50 -- common/autotest_common.sh@653 -- # es=22 00:07:47.783 ************************************ 00:07:47.783 END TEST dd_invalid_count 00:07:47.783 ************************************ 00:07:47.783 04:22:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.783 04:22:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.783 04:22:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.783 00:07:47.783 real 0m0.073s 00:07:47.783 user 0m0.048s 00:07:47.783 sys 0m0.024s 00:07:47.783 04:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.783 04:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 04:22:50 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:47.783 04:22:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.783 04:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.783 04:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 ************************************ 00:07:47.783 START TEST dd_invalid_oflag 00:07:47.783 ************************************ 00:07:47.783 04:22:50 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:07:47.783 04:22:50 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:47.783 04:22:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:47.783 04:22:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:47.783 04:22:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.783 04:22:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.783 04:22:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.783 04:22:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:47.783 [2024-12-07 04:22:50.875021] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:07:47.783 04:22:50 -- common/autotest_common.sh@653 -- # es=22 00:07:47.783 ************************************ 00:07:47.783 END TEST dd_invalid_oflag 00:07:47.783 ************************************ 00:07:47.783 04:22:50 -- 
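The value checks exercised above (--bs=0 in dd_wrong_blocksize, --count=-9 here) can be probed the same way without the harness, since both are rejected during up-front argument validation before the app starts. A compact sketch, assuming the same $SPDK path and empty dump files as in the earlier snippet:

# Sketch only: bad option values should fail spdk_dd argument validation.
for bad in '--bs=0' '--count=-9'; do
    if "$SPDK"/build/bin/spdk_dd --if=dump0 --of=dump1 "$bad" 2>> err.log; then
        echo "FAIL: $bad was accepted" >&2
        exit 1
    fi
done
echo 'PASS: invalid --bs and --count values rejected'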
common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.783 04:22:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.783 04:22:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.783 00:07:47.783 real 0m0.070s 00:07:47.783 user 0m0.041s 00:07:47.783 sys 0m0.028s 00:07:47.783 04:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.783 04:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 04:22:50 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:47.783 04:22:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.783 04:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.783 04:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 ************************************ 00:07:47.783 START TEST dd_invalid_iflag 00:07:47.783 ************************************ 00:07:47.783 04:22:50 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:07:47.783 04:22:50 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:47.783 04:22:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:47.783 04:22:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:47.783 04:22:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.783 04:22:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.783 04:22:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.783 04:22:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.783 04:22:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:47.783 [2024-12-07 04:22:50.991023] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:07:47.783 04:22:51 -- common/autotest_common.sh@653 -- # es=22 00:07:47.783 04:22:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.783 04:22:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.783 04:22:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.783 00:07:47.783 real 0m0.068s 00:07:47.783 user 0m0.052s 00:07:47.783 sys 0m0.015s 00:07:47.783 04:22:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.783 04:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:47.783 ************************************ 00:07:47.783 END TEST dd_invalid_iflag 00:07:47.783 ************************************ 00:07:48.043 04:22:51 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:48.043 04:22:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.043 04:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.043 04:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.043 ************************************ 00:07:48.043 START TEST dd_unknown_flag 00:07:48.043 ************************************ 00:07:48.043 04:22:51 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:07:48.043 04:22:51 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:48.043 04:22:51 -- common/autotest_common.sh@650 -- # local es=0 00:07:48.043 04:22:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:48.043 04:22:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.043 04:22:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.043 04:22:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.043 04:22:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.043 04:22:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.044 04:22:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.044 04:22:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.044 04:22:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.044 04:22:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:48.044 [2024-12-07 04:22:51.115764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.044 [2024-12-07 04:22:51.115843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59689 ] 00:07:48.044 [2024-12-07 04:22:51.257550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.302 [2024-12-07 04:22:51.329705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.302 [2024-12-07 04:22:51.389201] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:07:48.302 [2024-12-07 04:22:51.389267] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:48.302 [2024-12-07 04:22:51.389282] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:48.302 [2024-12-07 04:22:51.389296] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.302 [2024-12-07 04:22:51.466789] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:48.561 04:22:51 -- common/autotest_common.sh@653 -- # es=236 00:07:48.561 04:22:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.561 04:22:51 -- common/autotest_common.sh@662 -- # es=108 00:07:48.562 04:22:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.562 04:22:51 -- common/autotest_common.sh@670 -- # es=1 00:07:48.562 04:22:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.562 00:07:48.562 real 0m0.528s 00:07:48.562 user 0m0.308s 00:07:48.562 sys 0m0.113s 00:07:48.562 04:22:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.562 04:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.562 ************************************ 00:07:48.562 END 
TEST dd_unknown_flag 00:07:48.562 ************************************ 00:07:48.562 04:22:51 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:48.562 04:22:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.562 04:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.562 04:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.562 ************************************ 00:07:48.562 START TEST dd_invalid_json 00:07:48.562 ************************************ 00:07:48.562 04:22:51 -- common/autotest_common.sh@1114 -- # invalid_json 00:07:48.562 04:22:51 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:48.562 04:22:51 -- common/autotest_common.sh@650 -- # local es=0 00:07:48.562 04:22:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:48.562 04:22:51 -- dd/negative_dd.sh@95 -- # : 00:07:48.562 04:22:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.562 04:22:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.562 04:22:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.562 04:22:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.562 04:22:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.562 04:22:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.562 04:22:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.562 04:22:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.562 04:22:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:48.562 [2024-12-07 04:22:51.701441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.562 [2024-12-07 04:22:51.701539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59722 ] 00:07:48.821 [2024-12-07 04:22:51.842072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.821 [2024-12-07 04:22:51.914325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.821 [2024-12-07 04:22:51.914479] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:07:48.821 [2024-12-07 04:22:51.914504] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.821 [2024-12-07 04:22:51.914553] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:48.821 04:22:52 -- common/autotest_common.sh@653 -- # es=234 00:07:48.821 04:22:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.821 04:22:52 -- common/autotest_common.sh@662 -- # es=106 00:07:48.821 04:22:52 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.821 04:22:52 -- common/autotest_common.sh@670 -- # es=1 00:07:48.821 04:22:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.821 00:07:48.821 real 0m0.387s 00:07:48.821 user 0m0.219s 00:07:48.821 sys 0m0.065s 00:07:48.821 04:22:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.821 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:48.821 ************************************ 00:07:48.821 END TEST dd_invalid_json 00:07:48.821 ************************************ 00:07:49.081 00:07:49.081 real 0m3.176s 00:07:49.081 user 0m1.610s 00:07:49.081 sys 0m1.180s 00:07:49.081 04:22:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.081 ************************************ 00:07:49.081 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.081 END TEST spdk_dd_negative 00:07:49.081 ************************************ 00:07:49.081 00:07:49.081 real 1m6.001s 00:07:49.081 user 0m41.057s 00:07:49.081 sys 0m15.804s 00:07:49.081 04:22:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.081 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.081 ************************************ 00:07:49.081 END TEST spdk_dd 00:07:49.081 ************************************ 00:07:49.081 04:22:52 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:49.081 04:22:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.081 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.081 04:22:52 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:49.081 04:22:52 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:49.081 04:22:52 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:49.081 04:22:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.081 04:22:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.081 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.081 ************************************ 00:07:49.081 START TEST 
nvmf_tcp 00:07:49.081 ************************************ 00:07:49.081 04:22:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:49.081 * Looking for test storage... 00:07:49.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:49.081 04:22:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.081 04:22:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.081 04:22:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.341 04:22:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.341 04:22:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.341 04:22:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.341 04:22:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.341 04:22:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.341 04:22:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.341 04:22:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.341 04:22:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.341 04:22:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.341 04:22:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.341 04:22:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.341 04:22:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.341 04:22:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.341 04:22:52 -- scripts/common.sh@344 -- # : 1 00:07:49.341 04:22:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.341 04:22:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.341 04:22:52 -- scripts/common.sh@364 -- # decimal 1 00:07:49.341 04:22:52 -- scripts/common.sh@352 -- # local d=1 00:07:49.341 04:22:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.341 04:22:52 -- scripts/common.sh@354 -- # echo 1 00:07:49.341 04:22:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.341 04:22:52 -- scripts/common.sh@365 -- # decimal 2 00:07:49.341 04:22:52 -- scripts/common.sh@352 -- # local d=2 00:07:49.341 04:22:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.341 04:22:52 -- scripts/common.sh@354 -- # echo 2 00:07:49.341 04:22:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.341 04:22:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.341 04:22:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.341 04:22:52 -- scripts/common.sh@367 -- # return 0 00:07:49.341 04:22:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.341 04:22:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.342 --rc genhtml_branch_coverage=1 00:07:49.342 --rc genhtml_function_coverage=1 00:07:49.342 --rc genhtml_legend=1 00:07:49.342 --rc geninfo_all_blocks=1 00:07:49.342 --rc geninfo_unexecuted_blocks=1 00:07:49.342 00:07:49.342 ' 00:07:49.342 04:22:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.342 --rc genhtml_branch_coverage=1 00:07:49.342 --rc genhtml_function_coverage=1 00:07:49.342 --rc genhtml_legend=1 00:07:49.342 --rc geninfo_all_blocks=1 00:07:49.342 --rc geninfo_unexecuted_blocks=1 00:07:49.342 00:07:49.342 ' 00:07:49.342 04:22:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.342 --rc 
genhtml_branch_coverage=1 00:07:49.342 --rc genhtml_function_coverage=1 00:07:49.342 --rc genhtml_legend=1 00:07:49.342 --rc geninfo_all_blocks=1 00:07:49.342 --rc geninfo_unexecuted_blocks=1 00:07:49.342 00:07:49.342 ' 00:07:49.342 04:22:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.342 --rc genhtml_branch_coverage=1 00:07:49.342 --rc genhtml_function_coverage=1 00:07:49.342 --rc genhtml_legend=1 00:07:49.342 --rc geninfo_all_blocks=1 00:07:49.342 --rc geninfo_unexecuted_blocks=1 00:07:49.342 00:07:49.342 ' 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.342 04:22:52 -- nvmf/common.sh@7 -- # uname -s 00:07:49.342 04:22:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.342 04:22:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.342 04:22:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.342 04:22:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.342 04:22:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.342 04:22:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.342 04:22:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.342 04:22:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.342 04:22:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.342 04:22:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.342 04:22:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:07:49.342 04:22:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:07:49.342 04:22:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.342 04:22:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.342 04:22:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.342 04:22:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.342 04:22:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.342 04:22:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.342 04:22:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.342 04:22:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.342 04:22:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.342 04:22:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.342 04:22:52 -- paths/export.sh@5 -- # export PATH 00:07:49.342 04:22:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.342 04:22:52 -- nvmf/common.sh@46 -- # : 0 00:07:49.342 04:22:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.342 04:22:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.342 04:22:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.342 04:22:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.342 04:22:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.342 04:22:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.342 04:22:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.342 04:22:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:49.342 04:22:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.342 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:49.342 04:22:52 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:49.342 04:22:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.342 04:22:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.342 04:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.342 ************************************ 00:07:49.342 START TEST nvmf_host_management 00:07:49.342 ************************************ 00:07:49.342 04:22:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:49.342 * Looking for test storage... 
00:07:49.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:49.342 04:22:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.342 04:22:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.342 04:22:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.602 04:22:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.602 04:22:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.602 04:22:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.602 04:22:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.602 04:22:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.602 04:22:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.602 04:22:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.602 04:22:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.602 04:22:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.602 04:22:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.602 04:22:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.602 04:22:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.602 04:22:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.602 04:22:52 -- scripts/common.sh@344 -- # : 1 00:07:49.602 04:22:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.602 04:22:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.602 04:22:52 -- scripts/common.sh@364 -- # decimal 1 00:07:49.602 04:22:52 -- scripts/common.sh@352 -- # local d=1 00:07:49.602 04:22:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.602 04:22:52 -- scripts/common.sh@354 -- # echo 1 00:07:49.602 04:22:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.602 04:22:52 -- scripts/common.sh@365 -- # decimal 2 00:07:49.602 04:22:52 -- scripts/common.sh@352 -- # local d=2 00:07:49.602 04:22:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.602 04:22:52 -- scripts/common.sh@354 -- # echo 2 00:07:49.602 04:22:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.602 04:22:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.602 04:22:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.602 04:22:52 -- scripts/common.sh@367 -- # return 0 00:07:49.602 04:22:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.602 04:22:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.602 --rc genhtml_branch_coverage=1 00:07:49.602 --rc genhtml_function_coverage=1 00:07:49.602 --rc genhtml_legend=1 00:07:49.602 --rc geninfo_all_blocks=1 00:07:49.602 --rc geninfo_unexecuted_blocks=1 00:07:49.602 00:07:49.602 ' 00:07:49.602 04:22:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.602 --rc genhtml_branch_coverage=1 00:07:49.602 --rc genhtml_function_coverage=1 00:07:49.602 --rc genhtml_legend=1 00:07:49.602 --rc geninfo_all_blocks=1 00:07:49.602 --rc geninfo_unexecuted_blocks=1 00:07:49.602 00:07:49.602 ' 00:07:49.602 04:22:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.602 --rc genhtml_branch_coverage=1 00:07:49.602 --rc genhtml_function_coverage=1 00:07:49.602 --rc genhtml_legend=1 00:07:49.602 --rc geninfo_all_blocks=1 00:07:49.602 --rc geninfo_unexecuted_blocks=1 00:07:49.602 00:07:49.602 ' 00:07:49.602 
04:22:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.603 --rc genhtml_branch_coverage=1 00:07:49.603 --rc genhtml_function_coverage=1 00:07:49.603 --rc genhtml_legend=1 00:07:49.603 --rc geninfo_all_blocks=1 00:07:49.603 --rc geninfo_unexecuted_blocks=1 00:07:49.603 00:07:49.603 ' 00:07:49.603 04:22:52 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.603 04:22:52 -- nvmf/common.sh@7 -- # uname -s 00:07:49.603 04:22:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.603 04:22:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.603 04:22:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.603 04:22:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.603 04:22:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.603 04:22:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.603 04:22:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.603 04:22:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.603 04:22:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.603 04:22:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.603 04:22:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:07:49.603 04:22:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:07:49.603 04:22:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.603 04:22:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.603 04:22:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.603 04:22:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.603 04:22:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.603 04:22:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.603 04:22:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.603 04:22:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.603 04:22:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.603 04:22:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.603 04:22:52 -- paths/export.sh@5 -- # export PATH 00:07:49.603 04:22:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.603 04:22:52 -- nvmf/common.sh@46 -- # : 0 00:07:49.603 04:22:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.603 04:22:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.603 04:22:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.603 04:22:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.603 04:22:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.603 04:22:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.603 04:22:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.603 04:22:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.603 04:22:52 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.603 04:22:52 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.603 04:22:52 -- target/host_management.sh@104 -- # nvmftestinit 00:07:49.603 04:22:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:49.603 04:22:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.603 04:22:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:49.603 04:22:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:49.603 04:22:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:49.603 04:22:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.603 04:22:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.603 04:22:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.603 04:22:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:49.603 04:22:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:49.603 04:22:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:49.603 04:22:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:49.603 04:22:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:49.603 04:22:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:49.603 04:22:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.603 04:22:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.603 04:22:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:49.603 04:22:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:49.603 04:22:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:49.603 04:22:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:49.603 04:22:52 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:49.603 04:22:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.603 04:22:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:49.603 04:22:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:49.603 04:22:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:49.603 04:22:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:49.603 04:22:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:49.603 Cannot find device "nvmf_init_br" 00:07:49.603 04:22:52 -- nvmf/common.sh@153 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:49.603 Cannot find device "nvmf_tgt_br" 00:07:49.603 04:22:52 -- nvmf/common.sh@154 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:49.603 Cannot find device "nvmf_tgt_br2" 00:07:49.603 04:22:52 -- nvmf/common.sh@155 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:49.603 Cannot find device "nvmf_init_br" 00:07:49.603 04:22:52 -- nvmf/common.sh@156 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:49.603 Cannot find device "nvmf_tgt_br" 00:07:49.603 04:22:52 -- nvmf/common.sh@157 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:49.603 Cannot find device "nvmf_tgt_br2" 00:07:49.603 04:22:52 -- nvmf/common.sh@158 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:49.603 Cannot find device "nvmf_br" 00:07:49.603 04:22:52 -- nvmf/common.sh@159 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:49.603 Cannot find device "nvmf_init_if" 00:07:49.603 04:22:52 -- nvmf/common.sh@160 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:49.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.603 04:22:52 -- nvmf/common.sh@161 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:49.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.603 04:22:52 -- nvmf/common.sh@162 -- # true 00:07:49.603 04:22:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:49.603 04:22:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:49.603 04:22:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:49.603 04:22:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:49.603 04:22:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:49.603 04:22:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:49.603 04:22:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:49.603 04:22:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:49.603 04:22:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:49.603 04:22:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:49.603 04:22:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:49.603 04:22:52 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:49.862 04:22:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:49.863 04:22:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:49.863 04:22:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:49.863 04:22:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:49.863 04:22:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:49.863 04:22:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:49.863 04:22:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:49.863 04:22:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:49.863 04:22:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:49.863 04:22:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:49.863 04:22:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:49.863 04:22:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:49.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:07:49.863 00:07:49.863 --- 10.0.0.2 ping statistics --- 00:07:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.863 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:49.863 04:22:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:49.863 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:49.863 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:07:49.863 00:07:49.863 --- 10.0.0.3 ping statistics --- 00:07:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.863 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:49.863 04:22:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:49.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:49.863 00:07:49.863 --- 10.0.0.1 ping statistics --- 00:07:49.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.863 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:49.863 04:22:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.863 04:22:53 -- nvmf/common.sh@421 -- # return 0 00:07:49.863 04:22:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:49.863 04:22:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.863 04:22:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:49.863 04:22:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:49.863 04:22:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.863 04:22:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:49.863 04:22:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:49.863 04:22:53 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:49.863 04:22:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.863 04:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.863 04:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:49.863 ************************************ 00:07:49.863 START TEST nvmf_host_management 00:07:49.863 ************************************ 00:07:49.863 04:22:53 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:07:49.863 04:22:53 -- target/host_management.sh@69 -- # starttarget 00:07:49.863 04:22:53 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:49.863 04:22:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:49.863 04:22:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.863 04:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:49.863 04:22:53 -- nvmf/common.sh@469 -- # nvmfpid=59999 00:07:49.863 04:22:53 -- nvmf/common.sh@470 -- # waitforlisten 59999 00:07:49.863 04:22:53 -- common/autotest_common.sh@829 -- # '[' -z 59999 ']' 00:07:49.863 04:22:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:49.863 04:22:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.863 04:22:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.863 04:22:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.863 04:22:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.863 04:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:50.123 [2024-12-07 04:22:53.136797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.123 [2024-12-07 04:22:53.136930] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.123 [2024-12-07 04:22:53.285272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.381 [2024-12-07 04:22:53.376421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.381 [2024-12-07 04:22:53.376593] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:50.381 [2024-12-07 04:22:53.376611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.381 [2024-12-07 04:22:53.376622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.381 [2024-12-07 04:22:53.376769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.381 [2024-12-07 04:22:53.376852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.381 [2024-12-07 04:22:53.377630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.381 [2024-12-07 04:22:53.377672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.947 04:22:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.947 04:22:54 -- common/autotest_common.sh@862 -- # return 0 00:07:50.947 04:22:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:50.947 04:22:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.947 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:50.947 04:22:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.947 04:22:54 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.947 04:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.947 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:50.947 [2024-12-07 04:22:54.184196] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.206 04:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.206 04:22:54 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:51.206 04:22:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.206 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:51.206 04:22:54 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:51.206 04:22:54 -- target/host_management.sh@23 -- # cat 00:07:51.206 04:22:54 -- target/host_management.sh@30 -- # rpc_cmd 00:07:51.206 04:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.206 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:51.206 Malloc0 00:07:51.206 [2024-12-07 04:22:54.258128] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.206 04:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.206 04:22:54 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:51.206 04:22:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.206 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:51.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
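Note on the core layout in the target startup above: the -m 0x1E mask handed to nvmf_tgt selects CPU cores 1 through 4 (0x1E is binary 11110), which matches the four "Reactor started on core" notices in the trace, while the bdevperf initiator launched below is pinned to core 0 via -c 0x1. A one-line bash check of the mask, included only as an illustration:

# 0x1E in binary is 11110 -> bits 1-4 set, bit 0 (core 0) clear
echo 'obase=2; ibase=16; 1E' | bc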
00:07:51.206 04:22:54 -- target/host_management.sh@73 -- # perfpid=60059 00:07:51.206 04:22:54 -- target/host_management.sh@74 -- # waitforlisten 60059 /var/tmp/bdevperf.sock 00:07:51.206 04:22:54 -- common/autotest_common.sh@829 -- # '[' -z 60059 ']' 00:07:51.206 04:22:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.206 04:22:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.206 04:22:54 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:51.206 04:22:54 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:51.206 04:22:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.206 04:22:54 -- nvmf/common.sh@520 -- # config=() 00:07:51.206 04:22:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.206 04:22:54 -- nvmf/common.sh@520 -- # local subsystem config 00:07:51.206 04:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:51.206 04:22:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:51.206 04:22:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:51.206 { 00:07:51.206 "params": { 00:07:51.206 "name": "Nvme$subsystem", 00:07:51.206 "trtype": "$TEST_TRANSPORT", 00:07:51.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.206 "adrfam": "ipv4", 00:07:51.206 "trsvcid": "$NVMF_PORT", 00:07:51.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.206 "hdgst": ${hdgst:-false}, 00:07:51.206 "ddgst": ${ddgst:-false} 00:07:51.206 }, 00:07:51.206 "method": "bdev_nvme_attach_controller" 00:07:51.206 } 00:07:51.206 EOF 00:07:51.206 )") 00:07:51.206 04:22:54 -- nvmf/common.sh@542 -- # cat 00:07:51.206 04:22:54 -- nvmf/common.sh@544 -- # jq . 00:07:51.206 04:22:54 -- nvmf/common.sh@545 -- # IFS=, 00:07:51.206 04:22:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:51.206 "params": { 00:07:51.206 "name": "Nvme0", 00:07:51.206 "trtype": "tcp", 00:07:51.206 "traddr": "10.0.0.2", 00:07:51.206 "adrfam": "ipv4", 00:07:51.206 "trsvcid": "4420", 00:07:51.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.206 "hdgst": false, 00:07:51.206 "ddgst": false 00:07:51.206 }, 00:07:51.206 "method": "bdev_nvme_attach_controller" 00:07:51.206 }' 00:07:51.206 [2024-12-07 04:22:54.378057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.206 [2024-12-07 04:22:54.378179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:07:51.530 [2024-12-07 04:22:54.526447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.530 [2024-12-07 04:22:54.596911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.530 Running I/O for 10 seconds... 
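The bdevperf command above receives its configuration on /dev/fd/63, i.e. the output of gen_nvmf_target_json is streamed in through process substitution rather than written to disk. A minimal standalone sketch of an equivalent invocation follows; the attach parameters are the ones printed in the trace, but the surrounding "subsystems"/"config" wrapper is an assumption about the usual SPDK JSON-config layout, since the log only shows the inner fragment.

#!/usr/bin/env bash
# Sketch: drive bdevperf against the NVMe/TCP target set up above.
# The JSON wrapper is assumed; the params block mirrors the values in the trace.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The test passes the config via process substitution (hence /dev/fd/63 in the
# log); pointing --json at a file on disk is equivalent for this sketch.
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 10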
00:07:52.463 04:22:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.463 04:22:55 -- common/autotest_common.sh@862 -- # return 0 00:07:52.463 04:22:55 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:52.463 04:22:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.463 04:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:52.463 04:22:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.463 04:22:55 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.463 04:22:55 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:52.463 04:22:55 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:52.463 04:22:55 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:52.463 04:22:55 -- target/host_management.sh@52 -- # local ret=1 00:07:52.463 04:22:55 -- target/host_management.sh@53 -- # local i 00:07:52.463 04:22:55 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:52.463 04:22:55 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.463 04:22:55 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.463 04:22:55 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.463 04:22:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.463 04:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:52.463 04:22:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.463 04:22:55 -- target/host_management.sh@55 -- # read_io_count=1745 00:07:52.463 04:22:55 -- target/host_management.sh@58 -- # '[' 1745 -ge 100 ']' 00:07:52.463 04:22:55 -- target/host_management.sh@59 -- # ret=0 00:07:52.463 04:22:55 -- target/host_management.sh@60 -- # break 00:07:52.463 04:22:55 -- target/host_management.sh@64 -- # return 0 00:07:52.463 04:22:55 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.463 04:22:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.463 04:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:52.463 [2024-12-07 04:22:55.458158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.463 [2024-12-07 04:22:55.458207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.463 [2024-12-07 04:22:55.458219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.463 [2024-12-07 04:22:55.458229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.463 [2024-12-07 04:22:55.458238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.463 [2024-12-07 04:22:55.458247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.463 [2024-12-07 04:22:55.458255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the 
state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cd00 is same with the state(5) to be set 00:07:52.464 [2024-12-07 04:22:55.458781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.458818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.458863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.458882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.458901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.458917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.458934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.458949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.458968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.458984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.459977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.459998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.460014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.460033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.464 [2024-12-07 04:22:55.460049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.464 [2024-12-07 04:22:55.460067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106496 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114688 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.460975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.460991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.461010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.461028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.461047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.461064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.461083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.461099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.461118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.465 [2024-12-07 04:22:55.461133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.461155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2006400 is same with the state(5) to be set 00:07:52.465 [2024-12-07 04:22:55.461221] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2006400 was disconnected and freed. reset controller. 
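Each "ABORTED - SQ DELETION" completion in the dump above corresponds to a command that was still outstanding on the I/O queue when the queue pair was torn down; the len:128 in every entry is 128 blocks of the 512-byte Malloc bdev (MALLOC_BLOCK_SIZE=512 above), i.e. the 65536-byte I/O size bdevperf was started with (-o 65536).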
00:07:52.465 task offset: 109696 on job bdev=Nvme0n1 fails 00:07:52.465 00:07:52.465 Latency(us) 00:07:52.465 [2024-12-07T04:22:55.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.465 [2024-12-07T04:22:55.705Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.465 [2024-12-07T04:22:55.705Z] Job: Nvme0n1 ended in about 0.72 seconds with error 00:07:52.465 Verification LBA range: start 0x0 length 0x400 00:07:52.465 Nvme0n1 : 0.72 2579.08 161.19 88.84 0.00 23619.59 6881.28 30980.65 00:07:52.465 [2024-12-07T04:22:55.705Z] =================================================================================================================== 00:07:52.465 [2024-12-07T04:22:55.705Z] Total : 2579.08 161.19 88.84 0.00 23619.59 6881.28 30980.65 00:07:52.465 [2024-12-07 04:22:55.462707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:52.465 [2024-12-07 04:22:55.465243] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.465 [2024-12-07 04:22:55.465281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202c150 (9): Bad file descriptor 00:07:52.465 04:22:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.465 [2024-12-07 04:22:55.468705] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:52.465 [2024-12-07 04:22:55.468801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:52.465 [2024-12-07 04:22:55.468835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.465 [2024-12-07 04:22:55.468866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:52.465 [2024-12-07 04:22:55.468884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:52.465 [2024-12-07 04:22:55.468899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:52.465 [2024-12-07 04:22:55.468915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202c150 00:07:52.465 [2024-12-07 04:22:55.468969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202c150 (9): Bad file descriptor 00:07:52.465 [2024-12-07 04:22:55.468997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:52.466 [2024-12-07 04:22:55.469012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:52.466 [2024-12-07 04:22:55.469029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:52.466 [2024-12-07 04:22:55.469055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
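The sequence above is the host_management access-control check: with nqn.2016-06.io.spdk:host0 no longer on the allowed-host list of nqn.2016-06.io.spdk:cnode0 (the removal happens earlier in the script, before this excerpt), in-flight READs complete as ABORTED - SQ DELETION when the qpair is torn down, the FABRIC CONNECT retry is rejected with "does not allow host", and the controller reset fails. A minimal sketch of toggling the same permission against a running target, assuming the standard rpc.py socket at /var/tmp/spdk.sock and that this tree ships nvmf_subsystem_remove_host alongside the nvmf_subsystem_add_host call used just below:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode0
hostnqn=nqn.2016-06.io.spdk:host0

# Revoke access: qpairs belonging to the host are disconnected, queued I/O is
# aborted (SQ deletion), and new FABRIC CONNECT attempts are refused.
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

# Restore access so the initiator's next reconnect attempt succeeds.
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn"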
00:07:52.466 04:22:55 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.466 04:22:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.466 04:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:52.466 04:22:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.466 04:22:55 -- target/host_management.sh@87 -- # sleep 1 00:07:53.400 04:22:56 -- target/host_management.sh@91 -- # kill -9 60059 00:07:53.400 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60059) - No such process 00:07:53.400 04:22:56 -- target/host_management.sh@91 -- # true 00:07:53.400 04:22:56 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:53.400 04:22:56 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:53.400 04:22:56 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:53.400 04:22:56 -- nvmf/common.sh@520 -- # config=() 00:07:53.400 04:22:56 -- nvmf/common.sh@520 -- # local subsystem config 00:07:53.400 04:22:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:53.400 04:22:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:53.400 { 00:07:53.400 "params": { 00:07:53.400 "name": "Nvme$subsystem", 00:07:53.400 "trtype": "$TEST_TRANSPORT", 00:07:53.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.400 "adrfam": "ipv4", 00:07:53.400 "trsvcid": "$NVMF_PORT", 00:07:53.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.400 "hdgst": ${hdgst:-false}, 00:07:53.400 "ddgst": ${ddgst:-false} 00:07:53.400 }, 00:07:53.400 "method": "bdev_nvme_attach_controller" 00:07:53.400 } 00:07:53.400 EOF 00:07:53.400 )") 00:07:53.400 04:22:56 -- nvmf/common.sh@542 -- # cat 00:07:53.400 04:22:56 -- nvmf/common.sh@544 -- # jq . 00:07:53.400 04:22:56 -- nvmf/common.sh@545 -- # IFS=, 00:07:53.400 04:22:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:53.400 "params": { 00:07:53.400 "name": "Nvme0", 00:07:53.400 "trtype": "tcp", 00:07:53.400 "traddr": "10.0.0.2", 00:07:53.400 "adrfam": "ipv4", 00:07:53.400 "trsvcid": "4420", 00:07:53.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.400 "hdgst": false, 00:07:53.400 "ddgst": false 00:07:53.400 }, 00:07:53.400 "method": "bdev_nvme_attach_controller" 00:07:53.400 }' 00:07:53.400 [2024-12-07 04:22:56.558390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:53.400 [2024-12-07 04:22:56.558476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:07:53.658 [2024-12-07 04:22:56.699208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.658 [2024-12-07 04:22:56.768557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.916 Running I/O for 1 seconds... 
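The heredoc trace above shows how gen_nvmf_target_json expands into the single bdev_nvme_attach_controller entry that bdevperf consumes via --json /dev/fd/62. A standalone equivalent is sketched below; the outer "subsystems"/"bdev" envelope is assumed here (the trace only prints the inner method object), and /tmp/nvme0.json is just an illustrative stand-in for the process-substitution file descriptor:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same bdevperf invocation as in the trace, reading the config from a file.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1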
00:07:54.851 00:07:54.851 Latency(us) 00:07:54.851 [2024-12-07T04:22:58.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.851 [2024-12-07T04:22:58.091Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:54.851 Verification LBA range: start 0x0 length 0x400 00:07:54.851 Nvme0n1 : 1.01 2702.67 168.92 0.00 0.00 23265.15 1623.51 32172.22 00:07:54.851 [2024-12-07T04:22:58.091Z] =================================================================================================================== 00:07:54.851 [2024-12-07T04:22:58.091Z] Total : 2702.67 168.92 0.00 0.00 23265.15 1623.51 32172.22 00:07:55.109 04:22:58 -- target/host_management.sh@101 -- # stoptarget 00:07:55.109 04:22:58 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:55.109 04:22:58 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:55.109 04:22:58 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:55.109 04:22:58 -- target/host_management.sh@40 -- # nvmftestfini 00:07:55.109 04:22:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:55.109 04:22:58 -- nvmf/common.sh@116 -- # sync 00:07:55.109 04:22:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:55.109 04:22:58 -- nvmf/common.sh@119 -- # set +e 00:07:55.109 04:22:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:55.109 04:22:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:55.109 rmmod nvme_tcp 00:07:55.109 rmmod nvme_fabrics 00:07:55.109 rmmod nvme_keyring 00:07:55.109 04:22:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:55.109 04:22:58 -- nvmf/common.sh@123 -- # set -e 00:07:55.109 04:22:58 -- nvmf/common.sh@124 -- # return 0 00:07:55.109 04:22:58 -- nvmf/common.sh@477 -- # '[' -n 59999 ']' 00:07:55.109 04:22:58 -- nvmf/common.sh@478 -- # killprocess 59999 00:07:55.109 04:22:58 -- common/autotest_common.sh@936 -- # '[' -z 59999 ']' 00:07:55.109 04:22:58 -- common/autotest_common.sh@940 -- # kill -0 59999 00:07:55.109 04:22:58 -- common/autotest_common.sh@941 -- # uname 00:07:55.109 04:22:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.109 04:22:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59999 00:07:55.109 killing process with pid 59999 00:07:55.109 04:22:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:55.109 04:22:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:55.109 04:22:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59999' 00:07:55.109 04:22:58 -- common/autotest_common.sh@955 -- # kill 59999 00:07:55.109 04:22:58 -- common/autotest_common.sh@960 -- # wait 59999 00:07:55.368 [2024-12-07 04:22:58.424599] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:55.368 04:22:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:55.368 04:22:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:55.368 04:22:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:55.368 04:22:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.368 04:22:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:55.368 04:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.368 04:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.368 04:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.368 04:22:58 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:55.368 00:07:55.368 real 0m5.427s 00:07:55.368 user 0m22.906s 00:07:55.368 sys 0m1.197s 00:07:55.368 04:22:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.368 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.368 ************************************ 00:07:55.368 END TEST nvmf_host_management 00:07:55.368 ************************************ 00:07:55.368 04:22:58 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:55.368 00:07:55.368 real 0m6.094s 00:07:55.368 user 0m23.117s 00:07:55.368 sys 0m1.449s 00:07:55.368 04:22:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.368 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.368 ************************************ 00:07:55.368 END TEST nvmf_host_management 00:07:55.368 ************************************ 00:07:55.368 04:22:58 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:55.368 04:22:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:55.368 04:22:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.368 04:22:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.368 ************************************ 00:07:55.368 START TEST nvmf_lvol 00:07:55.368 ************************************ 00:07:55.368 04:22:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:55.627 * Looking for test storage... 00:07:55.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:55.627 04:22:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:55.627 04:22:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:55.627 04:22:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:55.627 04:22:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:55.627 04:22:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:55.627 04:22:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:55.627 04:22:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:55.627 04:22:58 -- scripts/common.sh@335 -- # IFS=.-: 00:07:55.627 04:22:58 -- scripts/common.sh@335 -- # read -ra ver1 00:07:55.627 04:22:58 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.627 04:22:58 -- scripts/common.sh@336 -- # read -ra ver2 00:07:55.627 04:22:58 -- scripts/common.sh@337 -- # local 'op=<' 00:07:55.627 04:22:58 -- scripts/common.sh@339 -- # ver1_l=2 00:07:55.627 04:22:58 -- scripts/common.sh@340 -- # ver2_l=1 00:07:55.627 04:22:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:55.628 04:22:58 -- scripts/common.sh@343 -- # case "$op" in 00:07:55.628 04:22:58 -- scripts/common.sh@344 -- # : 1 00:07:55.628 04:22:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:55.628 04:22:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.628 04:22:58 -- scripts/common.sh@364 -- # decimal 1 00:07:55.628 04:22:58 -- scripts/common.sh@352 -- # local d=1 00:07:55.628 04:22:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.628 04:22:58 -- scripts/common.sh@354 -- # echo 1 00:07:55.628 04:22:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:55.628 04:22:58 -- scripts/common.sh@365 -- # decimal 2 00:07:55.628 04:22:58 -- scripts/common.sh@352 -- # local d=2 00:07:55.628 04:22:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.628 04:22:58 -- scripts/common.sh@354 -- # echo 2 00:07:55.628 04:22:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:55.628 04:22:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:55.628 04:22:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:55.628 04:22:58 -- scripts/common.sh@367 -- # return 0 00:07:55.628 04:22:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.628 04:22:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.628 --rc genhtml_branch_coverage=1 00:07:55.628 --rc genhtml_function_coverage=1 00:07:55.628 --rc genhtml_legend=1 00:07:55.628 --rc geninfo_all_blocks=1 00:07:55.628 --rc geninfo_unexecuted_blocks=1 00:07:55.628 00:07:55.628 ' 00:07:55.628 04:22:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.628 --rc genhtml_branch_coverage=1 00:07:55.628 --rc genhtml_function_coverage=1 00:07:55.628 --rc genhtml_legend=1 00:07:55.628 --rc geninfo_all_blocks=1 00:07:55.628 --rc geninfo_unexecuted_blocks=1 00:07:55.628 00:07:55.628 ' 00:07:55.628 04:22:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.628 --rc genhtml_branch_coverage=1 00:07:55.628 --rc genhtml_function_coverage=1 00:07:55.628 --rc genhtml_legend=1 00:07:55.628 --rc geninfo_all_blocks=1 00:07:55.628 --rc geninfo_unexecuted_blocks=1 00:07:55.628 00:07:55.628 ' 00:07:55.628 04:22:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:55.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.628 --rc genhtml_branch_coverage=1 00:07:55.628 --rc genhtml_function_coverage=1 00:07:55.628 --rc genhtml_legend=1 00:07:55.628 --rc geninfo_all_blocks=1 00:07:55.628 --rc geninfo_unexecuted_blocks=1 00:07:55.628 00:07:55.628 ' 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.628 04:22:58 -- nvmf/common.sh@7 -- # uname -s 00:07:55.628 04:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.628 04:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.628 04:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.628 04:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.628 04:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.628 04:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.628 04:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.628 04:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.628 04:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.628 04:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.628 04:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:07:55.628 
04:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:07:55.628 04:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.628 04:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.628 04:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.628 04:22:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.628 04:22:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.628 04:22:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.628 04:22:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.628 04:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.628 04:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.628 04:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.628 04:22:58 -- paths/export.sh@5 -- # export PATH 00:07:55.628 04:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.628 04:22:58 -- nvmf/common.sh@46 -- # : 0 00:07:55.628 04:22:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:55.628 04:22:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:55.628 04:22:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:55.628 04:22:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.628 04:22:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.628 04:22:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
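nvmf/common.sh has just defined the connection identity used by these runs: listener ports 4420-4422, an initiator NQN freshly generated with nvme gen-hostnqn, and the matching NVME_HOSTID, bundled into NVME_HOST for the nvme connect wrapper. This particular run attaches namespaces through bdevperf rather than the kernel initiator, so nvme connect never fires in this excerpt; the sketch below only illustrates how those variables would typically be consumed, with the target address and subsystem NQN taken from the listeners created later in the log:

NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b
NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b

# Hypothetical kernel-initiator attach using the generated identity.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"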
00:07:55.628 04:22:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:55.628 04:22:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.628 04:22:58 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:55.628 04:22:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:55.628 04:22:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.628 04:22:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:55.628 04:22:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:55.628 04:22:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:55.628 04:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.628 04:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.628 04:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.628 04:22:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:55.628 04:22:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:55.628 04:22:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:55.628 04:22:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:55.628 04:22:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:55.628 04:22:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:55.629 04:22:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.629 04:22:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.629 04:22:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:55.629 04:22:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:55.629 04:22:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.629 04:22:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.629 04:22:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.629 04:22:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.629 04:22:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.629 04:22:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:55.629 04:22:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.629 04:22:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.629 04:22:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:55.629 04:22:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:55.629 Cannot find device "nvmf_tgt_br" 00:07:55.629 04:22:58 -- nvmf/common.sh@154 -- # true 00:07:55.629 04:22:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.629 Cannot find device "nvmf_tgt_br2" 00:07:55.629 04:22:58 -- nvmf/common.sh@155 -- # true 00:07:55.629 04:22:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:55.629 04:22:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:55.629 Cannot find device "nvmf_tgt_br" 00:07:55.629 04:22:58 -- nvmf/common.sh@157 -- # true 00:07:55.629 04:22:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:55.629 Cannot find device "nvmf_tgt_br2" 00:07:55.629 04:22:58 -- nvmf/common.sh@158 -- # true 00:07:55.629 04:22:58 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:07:55.888 04:22:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:55.888 04:22:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:55.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.888 04:22:58 -- nvmf/common.sh@161 -- # true 00:07:55.888 04:22:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.888 04:22:58 -- nvmf/common.sh@162 -- # true 00:07:55.888 04:22:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.888 04:22:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.888 04:22:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.888 04:22:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.888 04:22:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.888 04:22:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.888 04:22:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.888 04:22:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:55.888 04:22:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:55.888 04:22:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:55.888 04:22:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:55.888 04:22:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:55.888 04:22:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:55.888 04:22:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.888 04:22:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:55.888 04:22:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:55.888 04:22:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:55.888 04:22:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:55.888 04:22:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:55.888 04:22:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:55.888 04:22:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:55.888 04:22:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:55.888 04:22:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:55.888 04:22:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:55.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:07:55.888 00:07:55.888 --- 10.0.0.2 ping statistics --- 00:07:55.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.888 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:55.888 04:22:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:55.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:55.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:07:55.888 00:07:55.888 --- 10.0.0.3 ping statistics --- 00:07:55.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.888 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:55.888 04:22:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:55.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:55.888 00:07:55.888 --- 10.0.0.1 ping statistics --- 00:07:55.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.888 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:55.888 04:22:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.888 04:22:59 -- nvmf/common.sh@421 -- # return 0 00:07:55.888 04:22:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:55.888 04:22:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.888 04:22:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:55.888 04:22:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:55.888 04:22:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.888 04:22:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:55.888 04:22:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:55.888 04:22:59 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:55.888 04:22:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:55.888 04:22:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.888 04:22:59 -- common/autotest_common.sh@10 -- # set +x 00:07:55.888 04:22:59 -- nvmf/common.sh@469 -- # nvmfpid=60330 00:07:55.888 04:22:59 -- nvmf/common.sh@470 -- # waitforlisten 60330 00:07:55.888 04:22:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:55.888 04:22:59 -- common/autotest_common.sh@829 -- # '[' -z 60330 ']' 00:07:55.888 04:22:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.888 04:22:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.888 04:22:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.888 04:22:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.888 04:22:59 -- common/autotest_common.sh@10 -- # set +x 00:07:56.147 [2024-12-07 04:22:59.155341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.147 [2024-12-07 04:22:59.155463] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.147 [2024-12-07 04:22:59.294339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.147 [2024-12-07 04:22:59.362353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.147 [2024-12-07 04:22:59.362552] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.147 [2024-12-07 04:22:59.362568] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
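The nvmf_veth_init block above builds the virtual topology everything else rides on: an nvmf_tgt_ns_spdk namespace holding the target-side veth ends at 10.0.0.2 and 10.0.0.3, the initiator end at 10.0.0.1 in the default namespace, all host-side peers enslaved to the nvmf_br bridge, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of the same setup, restricted to commands that appear in the trace:

ip netns add nvmf_tgt_ns_spdk

# Initiator end stays in the default namespace; target ends move into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and allow NVMe/TCP in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # target address reachable from the initiator side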
00:07:56.147 [2024-12-07 04:22:59.362578] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.147 [2024-12-07 04:22:59.362719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.147 [2024-12-07 04:22:59.362893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.147 [2024-12-07 04:22:59.362899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.082 04:23:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.082 04:23:00 -- common/autotest_common.sh@862 -- # return 0 00:07:57.082 04:23:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:57.082 04:23:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.082 04:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:57.082 04:23:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.082 04:23:00 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:57.341 [2024-12-07 04:23:00.471222] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.341 04:23:00 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:57.599 04:23:00 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:57.599 04:23:00 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:57.857 04:23:01 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:57.857 04:23:01 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:58.115 04:23:01 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:58.373 04:23:01 -- target/nvmf_lvol.sh@29 -- # lvs=7dd58394-0fef-456f-84c3-b54c17349670 00:07:58.373 04:23:01 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7dd58394-0fef-456f-84c3-b54c17349670 lvol 20 00:07:58.631 04:23:01 -- target/nvmf_lvol.sh@32 -- # lvol=082dacf9-9f68-4e09-8b0e-633fed272a58 00:07:58.631 04:23:01 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:58.888 04:23:02 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 082dacf9-9f68-4e09-8b0e-633fed272a58 00:07:59.146 04:23:02 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:59.403 [2024-12-07 04:23:02.443167] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.403 04:23:02 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.662 04:23:02 -- target/nvmf_lvol.sh@42 -- # perf_pid=60406 00:07:59.662 04:23:02 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:59.662 04:23:02 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:00.598 04:23:03 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 082dacf9-9f68-4e09-8b0e-633fed272a58 MY_SNAPSHOT 
00:08:00.857 04:23:04 -- target/nvmf_lvol.sh@47 -- # snapshot=6e51058a-5e77-403d-8226-1cffb7c55402 00:08:00.857 04:23:04 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 082dacf9-9f68-4e09-8b0e-633fed272a58 30 00:08:01.116 04:23:04 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 6e51058a-5e77-403d-8226-1cffb7c55402 MY_CLONE 00:08:01.374 04:23:04 -- target/nvmf_lvol.sh@49 -- # clone=ab3d2e1d-ba33-4eef-bc29-967ed28d33f7 00:08:01.374 04:23:04 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ab3d2e1d-ba33-4eef-bc29-967ed28d33f7 00:08:01.941 04:23:04 -- target/nvmf_lvol.sh@53 -- # wait 60406 00:08:10.065 Initializing NVMe Controllers 00:08:10.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:10.065 Controller IO queue size 128, less than required. 00:08:10.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:10.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:10.065 Initialization complete. Launching workers. 00:08:10.065 ======================================================== 00:08:10.065 Latency(us) 00:08:10.065 Device Information : IOPS MiB/s Average min max 00:08:10.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10248.60 40.03 12490.68 1165.57 55805.74 00:08:10.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10238.50 39.99 12506.31 2804.12 47727.50 00:08:10.065 ======================================================== 00:08:10.065 Total : 20487.09 80.03 12498.49 1165.57 55805.74 00:08:10.065 00:08:10.065 04:23:13 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.065 04:23:13 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 082dacf9-9f68-4e09-8b0e-633fed272a58 00:08:10.325 04:23:13 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7dd58394-0fef-456f-84c3-b54c17349670 00:08:10.583 04:23:13 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:10.583 04:23:13 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:10.583 04:23:13 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:10.583 04:23:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:10.583 04:23:13 -- nvmf/common.sh@116 -- # sync 00:08:10.842 04:23:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:10.842 04:23:13 -- nvmf/common.sh@119 -- # set +e 00:08:10.842 04:23:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:10.842 04:23:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:10.842 rmmod nvme_tcp 00:08:10.842 rmmod nvme_fabrics 00:08:10.842 rmmod nvme_keyring 00:08:10.842 04:23:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:10.842 04:23:13 -- nvmf/common.sh@123 -- # set -e 00:08:10.842 04:23:13 -- nvmf/common.sh@124 -- # return 0 00:08:10.842 04:23:13 -- nvmf/common.sh@477 -- # '[' -n 60330 ']' 00:08:10.842 04:23:13 -- nvmf/common.sh@478 -- # killprocess 60330 00:08:10.842 04:23:13 -- common/autotest_common.sh@936 -- # '[' -z 60330 ']' 00:08:10.842 04:23:13 -- common/autotest_common.sh@940 -- # kill -0 60330 00:08:10.842 04:23:13 -- common/autotest_common.sh@941 -- # uname 00:08:10.842 
04:23:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.842 04:23:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60330 00:08:10.842 04:23:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:10.842 killing process with pid 60330 00:08:10.842 04:23:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:10.842 04:23:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60330' 00:08:10.842 04:23:13 -- common/autotest_common.sh@955 -- # kill 60330 00:08:10.842 04:23:13 -- common/autotest_common.sh@960 -- # wait 60330 00:08:11.101 04:23:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:11.101 04:23:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:11.101 04:23:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:11.101 04:23:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.101 04:23:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:11.101 04:23:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.101 04:23:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.101 04:23:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.101 04:23:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:11.101 00:08:11.101 real 0m15.589s 00:08:11.101 user 1m4.530s 00:08:11.101 sys 0m4.551s 00:08:11.101 04:23:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.101 04:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:11.101 ************************************ 00:08:11.101 END TEST nvmf_lvol 00:08:11.101 ************************************ 00:08:11.101 04:23:14 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:11.101 04:23:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.101 04:23:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.101 04:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:11.101 ************************************ 00:08:11.101 START TEST nvmf_lvs_grow 00:08:11.101 ************************************ 00:08:11.101 04:23:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:11.101 * Looking for test storage... 
00:08:11.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.102 04:23:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.102 04:23:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.102 04:23:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.361 04:23:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.361 04:23:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.361 04:23:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.361 04:23:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.361 04:23:14 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.361 04:23:14 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.361 04:23:14 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.361 04:23:14 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.361 04:23:14 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.361 04:23:14 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.361 04:23:14 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.361 04:23:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.361 04:23:14 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.361 04:23:14 -- scripts/common.sh@344 -- # : 1 00:08:11.361 04:23:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.361 04:23:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.361 04:23:14 -- scripts/common.sh@364 -- # decimal 1 00:08:11.361 04:23:14 -- scripts/common.sh@352 -- # local d=1 00:08:11.361 04:23:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.361 04:23:14 -- scripts/common.sh@354 -- # echo 1 00:08:11.361 04:23:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.361 04:23:14 -- scripts/common.sh@365 -- # decimal 2 00:08:11.361 04:23:14 -- scripts/common.sh@352 -- # local d=2 00:08:11.361 04:23:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.361 04:23:14 -- scripts/common.sh@354 -- # echo 2 00:08:11.361 04:23:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.361 04:23:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.361 04:23:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.361 04:23:14 -- scripts/common.sh@367 -- # return 0 00:08:11.361 04:23:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.361 04:23:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.361 --rc genhtml_branch_coverage=1 00:08:11.361 --rc genhtml_function_coverage=1 00:08:11.361 --rc genhtml_legend=1 00:08:11.361 --rc geninfo_all_blocks=1 00:08:11.361 --rc geninfo_unexecuted_blocks=1 00:08:11.361 00:08:11.361 ' 00:08:11.361 04:23:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.361 --rc genhtml_branch_coverage=1 00:08:11.361 --rc genhtml_function_coverage=1 00:08:11.361 --rc genhtml_legend=1 00:08:11.361 --rc geninfo_all_blocks=1 00:08:11.361 --rc geninfo_unexecuted_blocks=1 00:08:11.361 00:08:11.361 ' 00:08:11.361 04:23:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.361 --rc genhtml_branch_coverage=1 00:08:11.361 --rc genhtml_function_coverage=1 00:08:11.361 --rc genhtml_legend=1 00:08:11.361 --rc geninfo_all_blocks=1 00:08:11.361 --rc geninfo_unexecuted_blocks=1 00:08:11.361 00:08:11.361 ' 00:08:11.361 
04:23:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.361 --rc genhtml_branch_coverage=1 00:08:11.361 --rc genhtml_function_coverage=1 00:08:11.361 --rc genhtml_legend=1 00:08:11.361 --rc geninfo_all_blocks=1 00:08:11.361 --rc geninfo_unexecuted_blocks=1 00:08:11.361 00:08:11.361 ' 00:08:11.361 04:23:14 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.361 04:23:14 -- nvmf/common.sh@7 -- # uname -s 00:08:11.361 04:23:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.361 04:23:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.361 04:23:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.361 04:23:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.361 04:23:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.361 04:23:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.361 04:23:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.361 04:23:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.361 04:23:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.361 04:23:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.361 04:23:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:08:11.361 04:23:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:08:11.361 04:23:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.361 04:23:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.361 04:23:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.361 04:23:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.361 04:23:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.361 04:23:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.361 04:23:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.361 04:23:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.361 04:23:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.361 04:23:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.361 04:23:14 -- paths/export.sh@5 -- # export PATH 00:08:11.362 04:23:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.362 04:23:14 -- nvmf/common.sh@46 -- # : 0 00:08:11.362 04:23:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.362 04:23:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.362 04:23:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.362 04:23:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.362 04:23:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.362 04:23:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.362 04:23:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.362 04:23:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.362 04:23:14 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.362 04:23:14 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:11.362 04:23:14 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:11.362 04:23:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:11.362 04:23:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.362 04:23:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.362 04:23:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.362 04:23:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.362 04:23:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.362 04:23:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.362 04:23:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.362 04:23:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:11.362 04:23:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:11.362 04:23:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:11.362 04:23:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:11.362 04:23:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:11.362 04:23:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:11.362 04:23:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.362 04:23:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.362 04:23:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.362 04:23:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:11.362 04:23:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.362 04:23:14 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.362 04:23:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.362 04:23:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.362 04:23:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.362 04:23:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.362 04:23:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.362 04:23:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.362 04:23:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:11.362 04:23:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:11.362 Cannot find device "nvmf_tgt_br" 00:08:11.362 04:23:14 -- nvmf/common.sh@154 -- # true 00:08:11.362 04:23:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.362 Cannot find device "nvmf_tgt_br2" 00:08:11.362 04:23:14 -- nvmf/common.sh@155 -- # true 00:08:11.362 04:23:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:11.362 04:23:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:11.362 Cannot find device "nvmf_tgt_br" 00:08:11.362 04:23:14 -- nvmf/common.sh@157 -- # true 00:08:11.362 04:23:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:11.362 Cannot find device "nvmf_tgt_br2" 00:08:11.362 04:23:14 -- nvmf/common.sh@158 -- # true 00:08:11.362 04:23:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:11.362 04:23:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:11.362 04:23:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.362 04:23:14 -- nvmf/common.sh@161 -- # true 00:08:11.362 04:23:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.362 04:23:14 -- nvmf/common.sh@162 -- # true 00:08:11.362 04:23:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.362 04:23:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.362 04:23:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.362 04:23:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.362 04:23:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.621 04:23:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.621 04:23:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.621 04:23:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.621 04:23:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.621 04:23:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:11.621 04:23:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:11.621 04:23:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:11.621 04:23:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:11.621 04:23:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.621 04:23:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:11.621 04:23:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.621 04:23:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:11.621 04:23:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:11.621 04:23:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.621 04:23:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.621 04:23:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.621 04:23:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.621 04:23:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.621 04:23:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:11.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:11.621 00:08:11.621 --- 10.0.0.2 ping statistics --- 00:08:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.621 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:11.621 04:23:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:11.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:11.621 00:08:11.621 --- 10.0.0.3 ping statistics --- 00:08:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.621 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:11.621 04:23:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:11.621 00:08:11.621 --- 10.0.0.1 ping statistics --- 00:08:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.621 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:11.621 04:23:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.621 04:23:14 -- nvmf/common.sh@421 -- # return 0 00:08:11.621 04:23:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.621 04:23:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.621 04:23:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.621 04:23:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.621 04:23:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.621 04:23:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.621 04:23:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.621 04:23:14 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:11.621 04:23:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.621 04:23:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.621 04:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 04:23:14 -- nvmf/common.sh@469 -- # nvmfpid=60742 00:08:11.621 04:23:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.621 04:23:14 -- nvmf/common.sh@470 -- # waitforlisten 60742 00:08:11.621 04:23:14 -- common/autotest_common.sh@829 -- # '[' -z 60742 ']' 00:08:11.621 04:23:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.621 04:23:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
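For the lvs_grow run the target is started the same way as before, only on a single core (-m 0x1), wrapped in ip netns exec so it binds inside the test namespace, and nvmfappstart then blocks in waitforlisten until the RPC socket answers. waitforlisten's exact probe is not visible in this excerpt; a simple stand-in poll against the socket could look like:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Hypothetical readiness poll: rpc_get_methods succeeds once the app listens
# on /var/tmp/spdk.sock (the real waitforlisten helper may probe differently).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done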
00:08:11.621 04:23:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.621 04:23:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.621 04:23:14 -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 [2024-12-07 04:23:14.825445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.621 [2024-12-07 04:23:14.826294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.880 [2024-12-07 04:23:14.964444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.880 [2024-12-07 04:23:15.016963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.880 [2024-12-07 04:23:15.017107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.880 [2024-12-07 04:23:15.017120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.880 [2024-12-07 04:23:15.017129] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.880 [2024-12-07 04:23:15.017157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.813 04:23:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.813 04:23:15 -- common/autotest_common.sh@862 -- # return 0 00:08:12.813 04:23:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.813 04:23:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.813 04:23:15 -- common/autotest_common.sh@10 -- # set +x 00:08:12.813 04:23:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.813 04:23:15 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:12.813 [2024-12-07 04:23:16.018313] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.813 04:23:16 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:12.813 04:23:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.813 04:23:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.813 04:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:12.813 ************************************ 00:08:12.813 START TEST lvs_grow_clean 00:08:12.813 ************************************ 00:08:12.813 04:23:16 -- common/autotest_common.sh@1114 -- # lvs_grow 00:08:12.813 04:23:16 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:13.070 04:23:16 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:13.071 04:23:16 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:13.637 04:23:16 -- target/nvmf_lvs_grow.sh@28 -- # lvs=075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:13.637 04:23:16 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:13.637 04:23:16 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:13.637 04:23:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:13.637 04:23:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:13.637 04:23:16 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf lvol 150 00:08:13.895 04:23:17 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1665465f-b46f-4807-b03d-17e25db49967 00:08:13.895 04:23:17 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.895 04:23:17 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:14.153 [2024-12-07 04:23:17.348497] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:14.153 [2024-12-07 04:23:17.348586] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:14.153 true 00:08:14.154 04:23:17 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:14.154 04:23:17 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:14.412 04:23:17 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:14.412 04:23:17 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:14.670 04:23:17 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1665465f-b46f-4807-b03d-17e25db49967 00:08:14.928 04:23:18 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.186 [2024-12-07 04:23:18.253750] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.186 04:23:18 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
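Taken together, the clean-grow setup above is a short RPC sequence: create the TCP transport, back a 200 MiB file with an AIO bdev, put a 4 MiB-cluster lvstore on it (49 data clusters), carve out a 150 MiB lvol, enlarge the file to 400 MiB and rescan, then export the lvol as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420; the lvstore itself is only grown later, while I/O is running. A condensed, hedged replay of those calls, where the $rpc/$aio/$lvs/$lvol shell variables are illustrative shorthand rather than names from the script (this run produced lvstore 075eee4f-... and lvol 1665465f-...):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    $rpc nvmf_create_transport -t tcp -o -u 8192
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

    # enlarge the backing file and let the AIO bdev pick up the new size (cluster count stays at 49)
    truncate -s 400M "$aio"
    $rpc bdev_aio_rescan aio_bdev

    # export the lvol over NVMe/TCP
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # later, while bdevperf I/O is running against the namespace:
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99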
00:08:15.443 04:23:18 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60819 00:08:15.443 04:23:18 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.443 04:23:18 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.443 04:23:18 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60819 /var/tmp/bdevperf.sock 00:08:15.443 04:23:18 -- common/autotest_common.sh@829 -- # '[' -z 60819 ']' 00:08:15.443 04:23:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.443 04:23:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.443 04:23:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.443 04:23:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.443 04:23:18 -- common/autotest_common.sh@10 -- # set +x 00:08:15.443 [2024-12-07 04:23:18.588846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.444 [2024-12-07 04:23:18.589256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60819 ] 00:08:15.701 [2024-12-07 04:23:18.730635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.702 [2024-12-07 04:23:18.800628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.268 04:23:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.268 04:23:19 -- common/autotest_common.sh@862 -- # return 0 00:08:16.268 04:23:19 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.526 Nvme0n1 00:08:16.526 04:23:19 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:16.840 [ 00:08:16.840 { 00:08:16.840 "name": "Nvme0n1", 00:08:16.840 "aliases": [ 00:08:16.841 "1665465f-b46f-4807-b03d-17e25db49967" 00:08:16.841 ], 00:08:16.841 "product_name": "NVMe disk", 00:08:16.841 "block_size": 4096, 00:08:16.841 "num_blocks": 38912, 00:08:16.841 "uuid": "1665465f-b46f-4807-b03d-17e25db49967", 00:08:16.841 "assigned_rate_limits": { 00:08:16.841 "rw_ios_per_sec": 0, 00:08:16.841 "rw_mbytes_per_sec": 0, 00:08:16.841 "r_mbytes_per_sec": 0, 00:08:16.841 "w_mbytes_per_sec": 0 00:08:16.841 }, 00:08:16.841 "claimed": false, 00:08:16.841 "zoned": false, 00:08:16.841 "supported_io_types": { 00:08:16.841 "read": true, 00:08:16.841 "write": true, 00:08:16.841 "unmap": true, 00:08:16.841 "write_zeroes": true, 00:08:16.841 "flush": true, 00:08:16.841 "reset": true, 00:08:16.841 "compare": true, 00:08:16.841 "compare_and_write": true, 00:08:16.841 "abort": true, 00:08:16.841 "nvme_admin": true, 00:08:16.841 "nvme_io": true 00:08:16.841 }, 00:08:16.841 "driver_specific": { 00:08:16.841 "nvme": [ 00:08:16.841 { 00:08:16.841 "trid": { 00:08:16.841 "trtype": "TCP", 00:08:16.841 "adrfam": "IPv4", 00:08:16.841 "traddr": "10.0.0.2", 00:08:16.841 "trsvcid": "4420", 00:08:16.841 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:16.841 }, 00:08:16.841 "ctrlr_data": { 00:08:16.841 "cntlid": 1, 00:08:16.841 
"vendor_id": "0x8086", 00:08:16.841 "model_number": "SPDK bdev Controller", 00:08:16.841 "serial_number": "SPDK0", 00:08:16.841 "firmware_revision": "24.01.1", 00:08:16.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.841 "oacs": { 00:08:16.841 "security": 0, 00:08:16.841 "format": 0, 00:08:16.841 "firmware": 0, 00:08:16.841 "ns_manage": 0 00:08:16.841 }, 00:08:16.841 "multi_ctrlr": true, 00:08:16.841 "ana_reporting": false 00:08:16.841 }, 00:08:16.841 "vs": { 00:08:16.841 "nvme_version": "1.3" 00:08:16.841 }, 00:08:16.841 "ns_data": { 00:08:16.841 "id": 1, 00:08:16.841 "can_share": true 00:08:16.841 } 00:08:16.841 } 00:08:16.841 ], 00:08:16.841 "mp_policy": "active_passive" 00:08:16.841 } 00:08:16.841 } 00:08:16.841 ] 00:08:16.841 04:23:20 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:16.841 04:23:20 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60848 00:08:16.841 04:23:20 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:17.100 Running I/O for 10 seconds... 00:08:18.034 Latency(us) 00:08:18.034 [2024-12-07T04:23:21.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.034 [2024-12-07T04:23:21.274Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.034 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:18.034 [2024-12-07T04:23:21.274Z] =================================================================================================================== 00:08:18.034 [2024-12-07T04:23:21.274Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:18.034 00:08:18.969 04:23:22 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:18.969 [2024-12-07T04:23:22.209Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.969 Nvme0n1 : 2.00 6452.00 25.20 0.00 0.00 0.00 0.00 0.00 00:08:18.969 [2024-12-07T04:23:22.209Z] =================================================================================================================== 00:08:18.969 [2024-12-07T04:23:22.210Z] Total : 6452.00 25.20 0.00 0.00 0.00 0.00 0.00 00:08:18.970 00:08:19.237 true 00:08:19.237 04:23:22 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:19.237 04:23:22 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:19.503 04:23:22 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.503 04:23:22 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.503 04:23:22 -- target/nvmf_lvs_grow.sh@65 -- # wait 60848 00:08:20.069 [2024-12-07T04:23:23.310Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.070 Nvme0n1 : 3.00 6460.33 25.24 0.00 0.00 0.00 0.00 0.00 00:08:20.070 [2024-12-07T04:23:23.310Z] =================================================================================================================== 00:08:20.070 [2024-12-07T04:23:23.310Z] Total : 6460.33 25.24 0.00 0.00 0.00 0.00 0.00 00:08:20.070 00:08:21.006 [2024-12-07T04:23:24.247Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.007 Nvme0n1 : 4.00 6442.75 25.17 0.00 0.00 0.00 0.00 0.00 00:08:21.007 [2024-12-07T04:23:24.247Z] =================================================================================================================== 00:08:21.007 
[2024-12-07T04:23:24.247Z] Total : 6442.75 25.17 0.00 0.00 0.00 0.00 0.00 00:08:21.007 00:08:21.944 [2024-12-07T04:23:25.184Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.944 Nvme0n1 : 5.00 6424.20 25.09 0.00 0.00 0.00 0.00 0.00 00:08:21.944 [2024-12-07T04:23:25.184Z] =================================================================================================================== 00:08:21.944 [2024-12-07T04:23:25.184Z] Total : 6424.20 25.09 0.00 0.00 0.00 0.00 0.00 00:08:21.944 00:08:23.318 [2024-12-07T04:23:26.558Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.318 Nvme0n1 : 6.00 6433.00 25.13 0.00 0.00 0.00 0.00 0.00 00:08:23.318 [2024-12-07T04:23:26.558Z] =================================================================================================================== 00:08:23.318 [2024-12-07T04:23:26.558Z] Total : 6433.00 25.13 0.00 0.00 0.00 0.00 0.00 00:08:23.318 00:08:24.253 [2024-12-07T04:23:27.493Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.253 Nvme0n1 : 7.00 6421.14 25.08 0.00 0.00 0.00 0.00 0.00 00:08:24.253 [2024-12-07T04:23:27.493Z] =================================================================================================================== 00:08:24.253 [2024-12-07T04:23:27.493Z] Total : 6421.14 25.08 0.00 0.00 0.00 0.00 0.00 00:08:24.253 00:08:25.189 [2024-12-07T04:23:28.429Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.189 Nvme0n1 : 8.00 6412.25 25.05 0.00 0.00 0.00 0.00 0.00 00:08:25.189 [2024-12-07T04:23:28.429Z] =================================================================================================================== 00:08:25.189 [2024-12-07T04:23:28.429Z] Total : 6412.25 25.05 0.00 0.00 0.00 0.00 0.00 00:08:25.189 00:08:26.127 [2024-12-07T04:23:29.367Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.127 Nvme0n1 : 9.00 6405.33 25.02 0.00 0.00 0.00 0.00 0.00 00:08:26.127 [2024-12-07T04:23:29.367Z] =================================================================================================================== 00:08:26.127 [2024-12-07T04:23:29.367Z] Total : 6405.33 25.02 0.00 0.00 0.00 0.00 0.00 00:08:26.127 00:08:27.063 [2024-12-07T04:23:30.303Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.063 Nvme0n1 : 10.00 6387.10 24.95 0.00 0.00 0.00 0.00 0.00 00:08:27.063 [2024-12-07T04:23:30.303Z] =================================================================================================================== 00:08:27.063 [2024-12-07T04:23:30.303Z] Total : 6387.10 24.95 0.00 0.00 0.00 0.00 0.00 00:08:27.063 00:08:27.063 00:08:27.063 Latency(us) 00:08:27.063 [2024-12-07T04:23:30.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.063 [2024-12-07T04:23:30.303Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.063 Nvme0n1 : 10.01 6390.58 24.96 0.00 0.00 20024.57 5213.09 73400.32 00:08:27.063 [2024-12-07T04:23:30.303Z] =================================================================================================================== 00:08:27.063 [2024-12-07T04:23:30.303Z] Total : 6390.58 24.96 0.00 0.00 20024.57 5213.09 73400.32 00:08:27.063 0 00:08:27.063 04:23:30 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60819 00:08:27.063 04:23:30 -- common/autotest_common.sh@936 -- # '[' -z 60819 ']' 00:08:27.063 04:23:30 -- common/autotest_common.sh@940 -- # 
kill -0 60819 00:08:27.063 04:23:30 -- common/autotest_common.sh@941 -- # uname 00:08:27.063 04:23:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:27.063 04:23:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60819 00:08:27.063 04:23:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:27.063 04:23:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:27.063 04:23:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60819' 00:08:27.063 killing process with pid 60819 00:08:27.063 04:23:30 -- common/autotest_common.sh@955 -- # kill 60819 00:08:27.063 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.063 00:08:27.063 Latency(us) 00:08:27.063 [2024-12-07T04:23:30.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.063 [2024-12-07T04:23:30.303Z] =================================================================================================================== 00:08:27.063 [2024-12-07T04:23:30.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.063 04:23:30 -- common/autotest_common.sh@960 -- # wait 60819 00:08:27.322 04:23:30 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.580 04:23:30 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:27.581 04:23:30 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:27.839 04:23:30 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:27.839 04:23:30 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:27.839 04:23:30 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.098 [2024-12-07 04:23:31.226310] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:28.098 04:23:31 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:28.098 04:23:31 -- common/autotest_common.sh@650 -- # local es=0 00:08:28.098 04:23:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:28.098 04:23:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.098 04:23:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.098 04:23:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.098 04:23:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.098 04:23:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.098 04:23:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.098 04:23:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.098 04:23:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:28.098 04:23:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:28.357 request: 00:08:28.357 { 00:08:28.357 "uuid": "075eee4f-7abb-47ef-ad2c-0613c6c43baf", 00:08:28.358 "method": "bdev_lvol_get_lvstores", 
00:08:28.358 "req_id": 1 00:08:28.358 } 00:08:28.358 Got JSON-RPC error response 00:08:28.358 response: 00:08:28.358 { 00:08:28.358 "code": -19, 00:08:28.358 "message": "No such device" 00:08:28.358 } 00:08:28.358 04:23:31 -- common/autotest_common.sh@653 -- # es=1 00:08:28.358 04:23:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.358 04:23:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.358 04:23:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.358 04:23:31 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.617 aio_bdev 00:08:28.617 04:23:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1665465f-b46f-4807-b03d-17e25db49967 00:08:28.617 04:23:31 -- common/autotest_common.sh@897 -- # local bdev_name=1665465f-b46f-4807-b03d-17e25db49967 00:08:28.617 04:23:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:28.617 04:23:31 -- common/autotest_common.sh@899 -- # local i 00:08:28.617 04:23:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:28.617 04:23:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:28.617 04:23:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:28.876 04:23:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1665465f-b46f-4807-b03d-17e25db49967 -t 2000 00:08:29.136 [ 00:08:29.136 { 00:08:29.136 "name": "1665465f-b46f-4807-b03d-17e25db49967", 00:08:29.136 "aliases": [ 00:08:29.136 "lvs/lvol" 00:08:29.136 ], 00:08:29.136 "product_name": "Logical Volume", 00:08:29.136 "block_size": 4096, 00:08:29.136 "num_blocks": 38912, 00:08:29.136 "uuid": "1665465f-b46f-4807-b03d-17e25db49967", 00:08:29.136 "assigned_rate_limits": { 00:08:29.136 "rw_ios_per_sec": 0, 00:08:29.136 "rw_mbytes_per_sec": 0, 00:08:29.136 "r_mbytes_per_sec": 0, 00:08:29.136 "w_mbytes_per_sec": 0 00:08:29.136 }, 00:08:29.136 "claimed": false, 00:08:29.136 "zoned": false, 00:08:29.136 "supported_io_types": { 00:08:29.136 "read": true, 00:08:29.136 "write": true, 00:08:29.136 "unmap": true, 00:08:29.136 "write_zeroes": true, 00:08:29.136 "flush": false, 00:08:29.136 "reset": true, 00:08:29.136 "compare": false, 00:08:29.136 "compare_and_write": false, 00:08:29.136 "abort": false, 00:08:29.136 "nvme_admin": false, 00:08:29.136 "nvme_io": false 00:08:29.136 }, 00:08:29.136 "driver_specific": { 00:08:29.136 "lvol": { 00:08:29.136 "lvol_store_uuid": "075eee4f-7abb-47ef-ad2c-0613c6c43baf", 00:08:29.136 "base_bdev": "aio_bdev", 00:08:29.136 "thin_provision": false, 00:08:29.136 "snapshot": false, 00:08:29.136 "clone": false, 00:08:29.136 "esnap_clone": false 00:08:29.136 } 00:08:29.136 } 00:08:29.136 } 00:08:29.136 ] 00:08:29.136 04:23:32 -- common/autotest_common.sh@905 -- # return 0 00:08:29.136 04:23:32 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:29.136 04:23:32 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:29.395 04:23:32 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:29.395 04:23:32 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:29.395 04:23:32 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:29.655 04:23:32 -- target/nvmf_lvs_grow.sh@88 
-- # (( data_clusters == 99 )) 00:08:29.655 04:23:32 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1665465f-b46f-4807-b03d-17e25db49967 00:08:29.914 04:23:32 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 075eee4f-7abb-47ef-ad2c-0613c6c43baf 00:08:29.914 04:23:33 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.173 04:23:33 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.740 ************************************ 00:08:30.740 END TEST lvs_grow_clean 00:08:30.740 ************************************ 00:08:30.740 00:08:30.740 real 0m17.681s 00:08:30.740 user 0m16.844s 00:08:30.740 sys 0m2.253s 00:08:30.740 04:23:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:30.740 04:23:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:30.740 04:23:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:30.740 04:23:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.740 04:23:33 -- common/autotest_common.sh@10 -- # set +x 00:08:30.740 ************************************ 00:08:30.740 START TEST lvs_grow_dirty 00:08:30.740 ************************************ 00:08:30.740 04:23:33 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.740 04:23:33 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.999 04:23:34 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:30.999 04:23:34 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.258 04:23:34 -- target/nvmf_lvs_grow.sh@28 -- # lvs=04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:31.258 04:23:34 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:31.258 04:23:34 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.517 04:23:34 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.517 04:23:34 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.517 04:23:34 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf lvol 150 00:08:31.776 04:23:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:31.776 04:23:34 -- target/nvmf_lvs_grow.sh@36 -- # 
truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:31.776 04:23:34 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.034 [2024-12-07 04:23:35.072792] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.034 [2024-12-07 04:23:35.072876] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.034 true 00:08:32.034 04:23:35 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:32.034 04:23:35 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.296 04:23:35 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.296 04:23:35 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.561 04:23:35 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:32.819 04:23:35 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:32.819 04:23:36 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.077 04:23:36 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61088 00:08:33.077 04:23:36 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.077 04:23:36 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.077 04:23:36 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61088 /var/tmp/bdevperf.sock 00:08:33.077 04:23:36 -- common/autotest_common.sh@829 -- # '[' -z 61088 ']' 00:08:33.077 04:23:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.077 04:23:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.077 04:23:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.077 04:23:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.077 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:08:33.077 [2024-12-07 04:23:36.304589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:33.077 [2024-12-07 04:23:36.304940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61088 ] 00:08:33.334 [2024-12-07 04:23:36.446458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.334 [2024-12-07 04:23:36.512951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.268 04:23:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.268 04:23:37 -- common/autotest_common.sh@862 -- # return 0 00:08:34.268 04:23:37 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.526 Nvme0n1 00:08:34.526 04:23:37 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.785 [ 00:08:34.785 { 00:08:34.785 "name": "Nvme0n1", 00:08:34.785 "aliases": [ 00:08:34.785 "aed21b46-566b-4f02-b5c4-f28c6120f8cd" 00:08:34.785 ], 00:08:34.785 "product_name": "NVMe disk", 00:08:34.785 "block_size": 4096, 00:08:34.785 "num_blocks": 38912, 00:08:34.785 "uuid": "aed21b46-566b-4f02-b5c4-f28c6120f8cd", 00:08:34.785 "assigned_rate_limits": { 00:08:34.785 "rw_ios_per_sec": 0, 00:08:34.785 "rw_mbytes_per_sec": 0, 00:08:34.785 "r_mbytes_per_sec": 0, 00:08:34.785 "w_mbytes_per_sec": 0 00:08:34.785 }, 00:08:34.785 "claimed": false, 00:08:34.785 "zoned": false, 00:08:34.785 "supported_io_types": { 00:08:34.785 "read": true, 00:08:34.785 "write": true, 00:08:34.785 "unmap": true, 00:08:34.785 "write_zeroes": true, 00:08:34.785 "flush": true, 00:08:34.785 "reset": true, 00:08:34.785 "compare": true, 00:08:34.785 "compare_and_write": true, 00:08:34.785 "abort": true, 00:08:34.785 "nvme_admin": true, 00:08:34.785 "nvme_io": true 00:08:34.785 }, 00:08:34.785 "driver_specific": { 00:08:34.785 "nvme": [ 00:08:34.785 { 00:08:34.785 "trid": { 00:08:34.785 "trtype": "TCP", 00:08:34.785 "adrfam": "IPv4", 00:08:34.785 "traddr": "10.0.0.2", 00:08:34.785 "trsvcid": "4420", 00:08:34.785 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.785 }, 00:08:34.785 "ctrlr_data": { 00:08:34.785 "cntlid": 1, 00:08:34.785 "vendor_id": "0x8086", 00:08:34.785 "model_number": "SPDK bdev Controller", 00:08:34.785 "serial_number": "SPDK0", 00:08:34.785 "firmware_revision": "24.01.1", 00:08:34.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.785 "oacs": { 00:08:34.785 "security": 0, 00:08:34.785 "format": 0, 00:08:34.785 "firmware": 0, 00:08:34.785 "ns_manage": 0 00:08:34.785 }, 00:08:34.785 "multi_ctrlr": true, 00:08:34.785 "ana_reporting": false 00:08:34.785 }, 00:08:34.785 "vs": { 00:08:34.785 "nvme_version": "1.3" 00:08:34.785 }, 00:08:34.785 "ns_data": { 00:08:34.785 "id": 1, 00:08:34.785 "can_share": true 00:08:34.785 } 00:08:34.785 } 00:08:34.785 ], 00:08:34.785 "mp_policy": "active_passive" 00:08:34.785 } 00:08:34.785 } 00:08:34.785 ] 00:08:34.785 04:23:37 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61106 00:08:34.785 04:23:37 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.785 04:23:37 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.785 Running I/O for 10 seconds... 
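As in the earlier clean-grow run, the initiator side here is bdevperf started idle with -z on /var/tmp/bdevperf.sock, told over RPC to attach the exported namespace as Nvme0n1, and then driven for ten seconds of 4 KiB random writes at queue depth 128; in this dirty variant the lvstore is grown while that I/O is still in flight. A hedged sketch of the attach-and-run sequence, with flags and paths copied from the log (backgrounding and exact ordering are illustrative):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # start bdevperf idle (-z) so bdevs can be attached over its private RPC socket
    $bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the NVMe/TCP namespace as bdev Nvme0n1 and confirm it showed up
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000

    # run the configured 10-second randwrite workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests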
00:08:35.722 Latency(us) 00:08:35.722 [2024-12-07T04:23:38.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.722 [2024-12-07T04:23:38.962Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.722 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:35.722 [2024-12-07T04:23:38.962Z] =================================================================================================================== 00:08:35.722 [2024-12-07T04:23:38.962Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:35.722 00:08:36.657 04:23:39 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:36.916 [2024-12-07T04:23:40.156Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.916 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:36.916 [2024-12-07T04:23:40.156Z] =================================================================================================================== 00:08:36.916 [2024-12-07T04:23:40.156Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:08:36.916 00:08:37.175 true 00:08:37.175 04:23:40 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:37.175 04:23:40 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.433 04:23:40 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.433 04:23:40 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.433 04:23:40 -- target/nvmf_lvs_grow.sh@65 -- # wait 61106 00:08:37.999 [2024-12-07T04:23:41.239Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.999 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:08:37.999 [2024-12-07T04:23:41.239Z] =================================================================================================================== 00:08:37.999 [2024-12-07T04:23:41.239Z] Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:08:37.999 00:08:38.935 [2024-12-07T04:23:42.175Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.935 Nvme0n1 : 4.00 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:08:38.935 [2024-12-07T04:23:42.175Z] =================================================================================================================== 00:08:38.935 [2024-12-07T04:23:42.175Z] Total : 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:08:38.935 00:08:39.871 [2024-12-07T04:23:43.111Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.871 Nvme0n1 : 5.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:39.871 [2024-12-07T04:23:43.111Z] =================================================================================================================== 00:08:39.871 [2024-12-07T04:23:43.111Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:39.871 00:08:40.806 [2024-12-07T04:23:44.046Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.806 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:08:40.806 [2024-12-07T04:23:44.046Z] =================================================================================================================== 00:08:40.806 [2024-12-07T04:23:44.046Z] Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:08:40.806 00:08:41.743 [2024-12-07T04:23:44.983Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:41.743 Nvme0n1 : 7.00 6585.86 25.73 0.00 0.00 0.00 0.00 0.00 00:08:41.743 [2024-12-07T04:23:44.983Z] =================================================================================================================== 00:08:41.743 [2024-12-07T04:23:44.983Z] Total : 6585.86 25.73 0.00 0.00 0.00 0.00 0.00 00:08:41.743 00:08:43.121 [2024-12-07T04:23:46.361Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.121 Nvme0n1 : 8.00 6556.38 25.61 0.00 0.00 0.00 0.00 0.00 00:08:43.121 [2024-12-07T04:23:46.361Z] =================================================================================================================== 00:08:43.121 [2024-12-07T04:23:46.361Z] Total : 6556.38 25.61 0.00 0.00 0.00 0.00 0.00 00:08:43.121 00:08:44.057 [2024-12-07T04:23:47.297Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.057 Nvme0n1 : 9.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:44.057 [2024-12-07T04:23:47.297Z] =================================================================================================================== 00:08:44.057 [2024-12-07T04:23:47.297Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:44.057 00:08:44.994 [2024-12-07T04:23:48.234Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.994 Nvme0n1 : 10.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:44.994 [2024-12-07T04:23:48.234Z] =================================================================================================================== 00:08:44.994 [2024-12-07T04:23:48.234Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:44.994 00:08:44.994 00:08:44.994 Latency(us) 00:08:44.994 [2024-12-07T04:23:48.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.994 [2024-12-07T04:23:48.234Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.994 Nvme0n1 : 10.00 6423.69 25.09 0.00 0.00 19921.23 16205.27 188743.68 00:08:44.994 [2024-12-07T04:23:48.234Z] =================================================================================================================== 00:08:44.994 [2024-12-07T04:23:48.234Z] Total : 6423.69 25.09 0.00 0.00 19921.23 16205.27 188743.68 00:08:44.994 0 00:08:44.994 04:23:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61088 00:08:44.994 04:23:47 -- common/autotest_common.sh@936 -- # '[' -z 61088 ']' 00:08:44.994 04:23:47 -- common/autotest_common.sh@940 -- # kill -0 61088 00:08:44.994 04:23:47 -- common/autotest_common.sh@941 -- # uname 00:08:44.994 04:23:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:44.994 04:23:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61088 00:08:44.994 killing process with pid 61088 00:08:44.994 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.994 00:08:44.994 Latency(us) 00:08:44.994 [2024-12-07T04:23:48.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.994 [2024-12-07T04:23:48.234Z] =================================================================================================================== 00:08:44.994 [2024-12-07T04:23:48.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.994 04:23:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:44.994 04:23:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:44.994 04:23:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61088' 00:08:44.994 04:23:47 -- 
common/autotest_common.sh@955 -- # kill 61088 00:08:44.994 04:23:47 -- common/autotest_common.sh@960 -- # wait 61088 00:08:44.994 04:23:48 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.252 04:23:48 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:45.252 04:23:48 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:45.820 04:23:48 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:45.820 04:23:48 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:45.820 04:23:48 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60742 00:08:45.820 04:23:48 -- target/nvmf_lvs_grow.sh@74 -- # wait 60742 00:08:45.820 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60742 Killed "${NVMF_APP[@]}" "$@" 00:08:45.820 04:23:48 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:45.820 04:23:48 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:45.820 04:23:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.820 04:23:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.820 04:23:48 -- common/autotest_common.sh@10 -- # set +x 00:08:45.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.821 04:23:48 -- nvmf/common.sh@469 -- # nvmfpid=61238 00:08:45.821 04:23:48 -- nvmf/common.sh@470 -- # waitforlisten 61238 00:08:45.821 04:23:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:45.821 04:23:48 -- common/autotest_common.sh@829 -- # '[' -z 61238 ']' 00:08:45.821 04:23:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.821 04:23:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.821 04:23:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.821 04:23:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.821 04:23:48 -- common/autotest_common.sh@10 -- # set +x 00:08:45.821 [2024-12-07 04:23:48.841760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.821 [2024-12-07 04:23:48.842143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.821 [2024-12-07 04:23:48.984190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.821 [2024-12-07 04:23:49.037122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.821 [2024-12-07 04:23:49.037533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.821 [2024-12-07 04:23:49.037733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.821 [2024-12-07 04:23:49.037877] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.821 [2024-12-07 04:23:49.037995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.781 04:23:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.781 04:23:49 -- common/autotest_common.sh@862 -- # return 0 00:08:46.781 04:23:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.781 04:23:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.781 04:23:49 -- common/autotest_common.sh@10 -- # set +x 00:08:46.781 04:23:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.781 04:23:49 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.039 [2024-12-07 04:23:50.081762] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:47.039 [2024-12-07 04:23:50.082126] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:47.039 [2024-12-07 04:23:50.082485] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:47.039 04:23:50 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:47.039 04:23:50 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:47.039 04:23:50 -- common/autotest_common.sh@897 -- # local bdev_name=aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:47.039 04:23:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:47.039 04:23:50 -- common/autotest_common.sh@899 -- # local i 00:08:47.039 04:23:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:47.039 04:23:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:47.039 04:23:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.297 04:23:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aed21b46-566b-4f02-b5c4-f28c6120f8cd -t 2000 00:08:47.555 [ 00:08:47.555 { 00:08:47.555 "name": "aed21b46-566b-4f02-b5c4-f28c6120f8cd", 00:08:47.555 "aliases": [ 00:08:47.555 "lvs/lvol" 00:08:47.555 ], 00:08:47.555 "product_name": "Logical Volume", 00:08:47.555 "block_size": 4096, 00:08:47.555 "num_blocks": 38912, 00:08:47.555 "uuid": "aed21b46-566b-4f02-b5c4-f28c6120f8cd", 00:08:47.555 "assigned_rate_limits": { 00:08:47.555 "rw_ios_per_sec": 0, 00:08:47.555 "rw_mbytes_per_sec": 0, 00:08:47.555 "r_mbytes_per_sec": 0, 00:08:47.555 "w_mbytes_per_sec": 0 00:08:47.555 }, 00:08:47.555 "claimed": false, 00:08:47.555 "zoned": false, 00:08:47.555 "supported_io_types": { 00:08:47.555 "read": true, 00:08:47.555 "write": true, 00:08:47.555 "unmap": true, 00:08:47.555 "write_zeroes": true, 00:08:47.555 "flush": false, 00:08:47.555 "reset": true, 00:08:47.555 "compare": false, 00:08:47.555 "compare_and_write": false, 00:08:47.555 "abort": false, 00:08:47.555 "nvme_admin": false, 00:08:47.555 "nvme_io": false 00:08:47.555 }, 00:08:47.555 "driver_specific": { 00:08:47.555 "lvol": { 00:08:47.555 "lvol_store_uuid": "04d4a598-c79b-4ab0-a2b0-a80a9470decf", 00:08:47.555 "base_bdev": "aio_bdev", 00:08:47.555 "thin_provision": false, 00:08:47.555 "snapshot": false, 00:08:47.555 "clone": false, 00:08:47.555 "esnap_clone": false 00:08:47.555 } 00:08:47.555 } 00:08:47.555 } 00:08:47.555 ] 00:08:47.555 04:23:50 -- common/autotest_common.sh@905 -- # return 0 00:08:47.555 04:23:50 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:47.555 04:23:50 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:47.814 04:23:50 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:47.814 04:23:50 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:47.814 04:23:50 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:48.074 04:23:51 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:48.074 04:23:51 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.074 [2024-12-07 04:23:51.279898] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:48.074 04:23:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:48.074 04:23:51 -- common/autotest_common.sh@650 -- # local es=0 00:08:48.074 04:23:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:48.074 04:23:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.074 04:23:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.074 04:23:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.332 04:23:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.332 04:23:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.332 04:23:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.332 04:23:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.332 04:23:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:48.332 04:23:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:48.332 request: 00:08:48.332 { 00:08:48.332 "uuid": "04d4a598-c79b-4ab0-a2b0-a80a9470decf", 00:08:48.332 "method": "bdev_lvol_get_lvstores", 00:08:48.332 "req_id": 1 00:08:48.332 } 00:08:48.332 Got JSON-RPC error response 00:08:48.332 response: 00:08:48.332 { 00:08:48.332 "code": -19, 00:08:48.332 "message": "No such device" 00:08:48.332 } 00:08:48.591 04:23:51 -- common/autotest_common.sh@653 -- # es=1 00:08:48.591 04:23:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.591 04:23:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.591 04:23:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.591 04:23:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.591 aio_bdev 00:08:48.591 04:23:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:48.591 04:23:51 -- common/autotest_common.sh@897 -- # local bdev_name=aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:48.592 04:23:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:48.592 04:23:51 -- common/autotest_common.sh@899 -- # local i 00:08:48.592 04:23:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:48.592 04:23:51 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:48.592 04:23:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.851 04:23:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aed21b46-566b-4f02-b5c4-f28c6120f8cd -t 2000 00:08:49.109 [ 00:08:49.109 { 00:08:49.109 "name": "aed21b46-566b-4f02-b5c4-f28c6120f8cd", 00:08:49.109 "aliases": [ 00:08:49.109 "lvs/lvol" 00:08:49.109 ], 00:08:49.109 "product_name": "Logical Volume", 00:08:49.109 "block_size": 4096, 00:08:49.109 "num_blocks": 38912, 00:08:49.109 "uuid": "aed21b46-566b-4f02-b5c4-f28c6120f8cd", 00:08:49.109 "assigned_rate_limits": { 00:08:49.109 "rw_ios_per_sec": 0, 00:08:49.109 "rw_mbytes_per_sec": 0, 00:08:49.109 "r_mbytes_per_sec": 0, 00:08:49.109 "w_mbytes_per_sec": 0 00:08:49.109 }, 00:08:49.109 "claimed": false, 00:08:49.109 "zoned": false, 00:08:49.109 "supported_io_types": { 00:08:49.109 "read": true, 00:08:49.109 "write": true, 00:08:49.109 "unmap": true, 00:08:49.109 "write_zeroes": true, 00:08:49.109 "flush": false, 00:08:49.109 "reset": true, 00:08:49.109 "compare": false, 00:08:49.109 "compare_and_write": false, 00:08:49.109 "abort": false, 00:08:49.109 "nvme_admin": false, 00:08:49.109 "nvme_io": false 00:08:49.109 }, 00:08:49.109 "driver_specific": { 00:08:49.109 "lvol": { 00:08:49.109 "lvol_store_uuid": "04d4a598-c79b-4ab0-a2b0-a80a9470decf", 00:08:49.109 "base_bdev": "aio_bdev", 00:08:49.109 "thin_provision": false, 00:08:49.109 "snapshot": false, 00:08:49.109 "clone": false, 00:08:49.109 "esnap_clone": false 00:08:49.109 } 00:08:49.109 } 00:08:49.109 } 00:08:49.109 ] 00:08:49.109 04:23:52 -- common/autotest_common.sh@905 -- # return 0 00:08:49.109 04:23:52 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:49.109 04:23:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:49.369 04:23:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:49.369 04:23:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:49.369 04:23:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:49.629 04:23:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:49.629 04:23:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete aed21b46-566b-4f02-b5c4-f28c6120f8cd 00:08:49.889 04:23:52 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04d4a598-c79b-4ab0-a2b0-a80a9470decf 00:08:50.149 04:23:53 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.409 04:23:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.669 ************************************ 00:08:50.669 END TEST lvs_grow_dirty 00:08:50.669 ************************************ 00:08:50.669 00:08:50.669 real 0m20.055s 00:08:50.669 user 0m40.375s 00:08:50.669 sys 0m9.084s 00:08:50.669 04:23:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.669 04:23:53 -- common/autotest_common.sh@10 -- # set +x 00:08:50.669 04:23:53 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:50.669 04:23:53 -- common/autotest_common.sh@806 -- # type=--id 00:08:50.669 04:23:53 -- 
common/autotest_common.sh@807 -- # id=0 00:08:50.669 04:23:53 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:50.669 04:23:53 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:50.669 04:23:53 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:50.669 04:23:53 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:50.669 04:23:53 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:50.669 04:23:53 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:50.669 nvmf_trace.0 00:08:50.928 04:23:53 -- common/autotest_common.sh@821 -- # return 0 00:08:50.928 04:23:53 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:50.928 04:23:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:50.928 04:23:53 -- nvmf/common.sh@116 -- # sync 00:08:51.864 04:23:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:51.864 04:23:54 -- nvmf/common.sh@119 -- # set +e 00:08:51.864 04:23:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:51.864 04:23:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:51.864 rmmod nvme_tcp 00:08:51.864 rmmod nvme_fabrics 00:08:51.864 rmmod nvme_keyring 00:08:51.864 04:23:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:51.864 04:23:54 -- nvmf/common.sh@123 -- # set -e 00:08:51.864 04:23:54 -- nvmf/common.sh@124 -- # return 0 00:08:51.864 04:23:54 -- nvmf/common.sh@477 -- # '[' -n 61238 ']' 00:08:51.864 04:23:54 -- nvmf/common.sh@478 -- # killprocess 61238 00:08:51.864 04:23:54 -- common/autotest_common.sh@936 -- # '[' -z 61238 ']' 00:08:51.864 04:23:54 -- common/autotest_common.sh@940 -- # kill -0 61238 00:08:51.864 04:23:54 -- common/autotest_common.sh@941 -- # uname 00:08:51.864 04:23:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:51.864 04:23:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61238 00:08:51.864 killing process with pid 61238 00:08:51.864 04:23:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:51.864 04:23:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:51.864 04:23:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61238' 00:08:51.864 04:23:54 -- common/autotest_common.sh@955 -- # kill 61238 00:08:51.864 04:23:54 -- common/autotest_common.sh@960 -- # wait 61238 00:08:52.122 04:23:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:52.122 04:23:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:52.122 04:23:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:52.122 04:23:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.122 04:23:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:52.123 04:23:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.123 04:23:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.123 04:23:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.123 04:23:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:52.123 00:08:52.123 real 0m40.946s 00:08:52.123 user 1m4.198s 00:08:52.123 sys 0m12.776s 00:08:52.123 04:23:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.123 04:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.123 ************************************ 00:08:52.123 END TEST nvmf_lvs_grow 00:08:52.123 ************************************ 00:08:52.123 04:23:55 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:52.123 04:23:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:52.123 04:23:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.123 04:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.123 ************************************ 00:08:52.123 START TEST nvmf_bdev_io_wait 00:08:52.123 ************************************ 00:08:52.123 04:23:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:52.123 * Looking for test storage... 00:08:52.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.123 04:23:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:52.123 04:23:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:52.123 04:23:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:52.381 04:23:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:52.381 04:23:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:52.381 04:23:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:52.381 04:23:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:52.381 04:23:55 -- scripts/common.sh@335 -- # IFS=.-: 00:08:52.381 04:23:55 -- scripts/common.sh@335 -- # read -ra ver1 00:08:52.381 04:23:55 -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.381 04:23:55 -- scripts/common.sh@336 -- # read -ra ver2 00:08:52.381 04:23:55 -- scripts/common.sh@337 -- # local 'op=<' 00:08:52.381 04:23:55 -- scripts/common.sh@339 -- # ver1_l=2 00:08:52.381 04:23:55 -- scripts/common.sh@340 -- # ver2_l=1 00:08:52.381 04:23:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:52.381 04:23:55 -- scripts/common.sh@343 -- # case "$op" in 00:08:52.381 04:23:55 -- scripts/common.sh@344 -- # : 1 00:08:52.381 04:23:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:52.381 04:23:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.381 04:23:55 -- scripts/common.sh@364 -- # decimal 1 00:08:52.381 04:23:55 -- scripts/common.sh@352 -- # local d=1 00:08:52.381 04:23:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.381 04:23:55 -- scripts/common.sh@354 -- # echo 1 00:08:52.381 04:23:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:52.381 04:23:55 -- scripts/common.sh@365 -- # decimal 2 00:08:52.381 04:23:55 -- scripts/common.sh@352 -- # local d=2 00:08:52.381 04:23:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.381 04:23:55 -- scripts/common.sh@354 -- # echo 2 00:08:52.381 04:23:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:52.381 04:23:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:52.381 04:23:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:52.381 04:23:55 -- scripts/common.sh@367 -- # return 0 00:08:52.381 04:23:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.381 04:23:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.381 --rc genhtml_branch_coverage=1 00:08:52.381 --rc genhtml_function_coverage=1 00:08:52.381 --rc genhtml_legend=1 00:08:52.381 --rc geninfo_all_blocks=1 00:08:52.381 --rc geninfo_unexecuted_blocks=1 00:08:52.381 00:08:52.381 ' 00:08:52.381 04:23:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.381 --rc genhtml_branch_coverage=1 00:08:52.381 --rc genhtml_function_coverage=1 00:08:52.381 --rc genhtml_legend=1 00:08:52.381 --rc geninfo_all_blocks=1 00:08:52.381 --rc geninfo_unexecuted_blocks=1 00:08:52.381 00:08:52.381 ' 00:08:52.381 04:23:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.381 --rc genhtml_branch_coverage=1 00:08:52.381 --rc genhtml_function_coverage=1 00:08:52.381 --rc genhtml_legend=1 00:08:52.381 --rc geninfo_all_blocks=1 00:08:52.381 --rc geninfo_unexecuted_blocks=1 00:08:52.381 00:08:52.381 ' 00:08:52.381 04:23:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.381 --rc genhtml_branch_coverage=1 00:08:52.381 --rc genhtml_function_coverage=1 00:08:52.381 --rc genhtml_legend=1 00:08:52.381 --rc geninfo_all_blocks=1 00:08:52.381 --rc geninfo_unexecuted_blocks=1 00:08:52.381 00:08:52.381 ' 00:08:52.381 04:23:55 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.381 04:23:55 -- nvmf/common.sh@7 -- # uname -s 00:08:52.381 04:23:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.381 04:23:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.381 04:23:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.381 04:23:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.381 04:23:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.381 04:23:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.381 04:23:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.381 04:23:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.381 04:23:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.381 04:23:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.381 04:23:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 
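The lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares them field by field to decide which lcov option set to export. A simplified standalone sketch of that comparison (illustrative only, not the exact scripts/common.sh implementation):

  cmp_versions() {
      # $1 op $3, e.g. cmp_versions 1.15 '<' 2
      local op=$2
      local IFS='.-:'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=${#ver1[@]}
      (( ${#ver2[@]} > max )) && max=${#ver2[@]}
      for (( v = 0; v < max; v++ )); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
          if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
      done
      [[ $op == '==' || $op == '>=' || $op == '<=' ]]
  }
  cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2, keep the legacy --rc options"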
00:08:52.381 04:23:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:08:52.381 04:23:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.381 04:23:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.381 04:23:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.381 04:23:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.381 04:23:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.381 04:23:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.381 04:23:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.381 04:23:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.381 04:23:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.381 04:23:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.381 04:23:55 -- paths/export.sh@5 -- # export PATH 00:08:52.381 04:23:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.381 04:23:55 -- nvmf/common.sh@46 -- # : 0 00:08:52.381 04:23:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:52.381 04:23:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:52.381 04:23:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:52.381 04:23:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.381 04:23:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.381 04:23:55 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:52.381 04:23:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:52.381 04:23:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:52.381 04:23:55 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.381 04:23:55 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.381 04:23:55 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:52.381 04:23:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:52.381 04:23:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.381 04:23:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:52.381 04:23:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:52.381 04:23:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:52.381 04:23:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.381 04:23:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.382 04:23:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.382 04:23:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:52.382 04:23:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:52.382 04:23:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:52.382 04:23:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:52.382 04:23:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:52.382 04:23:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:52.382 04:23:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.382 04:23:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.382 04:23:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:52.382 04:23:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:52.382 04:23:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.382 04:23:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.382 04:23:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.382 04:23:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.382 04:23:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.382 04:23:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.382 04:23:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.382 04:23:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.382 04:23:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:52.382 04:23:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:52.382 Cannot find device "nvmf_tgt_br" 00:08:52.382 04:23:55 -- nvmf/common.sh@154 -- # true 00:08:52.382 04:23:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.382 Cannot find device "nvmf_tgt_br2" 00:08:52.382 04:23:55 -- nvmf/common.sh@155 -- # true 00:08:52.382 04:23:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:52.382 04:23:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:52.382 Cannot find device "nvmf_tgt_br" 00:08:52.382 04:23:55 -- nvmf/common.sh@157 -- # true 00:08:52.382 04:23:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:52.382 Cannot find device "nvmf_tgt_br2" 00:08:52.382 04:23:55 -- nvmf/common.sh@158 -- # true 00:08:52.382 04:23:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:52.382 04:23:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:52.382 04:23:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.382 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.382 04:23:55 -- nvmf/common.sh@161 -- # true 00:08:52.382 04:23:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.382 04:23:55 -- nvmf/common.sh@162 -- # true 00:08:52.382 04:23:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.382 04:23:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.382 04:23:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.382 04:23:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.382 04:23:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.382 04:23:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.640 04:23:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.640 04:23:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:52.640 04:23:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:52.640 04:23:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:52.640 04:23:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:52.640 04:23:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:52.640 04:23:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:52.640 04:23:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:52.640 04:23:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.640 04:23:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.640 04:23:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:52.640 04:23:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:52.640 04:23:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.640 04:23:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:52.640 04:23:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:52.640 04:23:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:52.640 04:23:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:52.640 04:23:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:52.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:08:52.640 00:08:52.640 --- 10.0.0.2 ping statistics --- 00:08:52.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.640 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:52.640 04:23:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:52.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:52.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:52.640 00:08:52.640 --- 10.0.0.3 ping statistics --- 00:08:52.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.640 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:52.640 04:23:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:52.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:52.640 00:08:52.640 --- 10.0.0.1 ping statistics --- 00:08:52.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.640 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:52.640 04:23:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.640 04:23:55 -- nvmf/common.sh@421 -- # return 0 00:08:52.640 04:23:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:52.640 04:23:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.640 04:23:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:52.640 04:23:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:52.641 04:23:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.641 04:23:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:52.641 04:23:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:52.641 04:23:55 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:52.641 04:23:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.641 04:23:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.641 04:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.641 04:23:55 -- nvmf/common.sh@469 -- # nvmfpid=61568 00:08:52.641 04:23:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:52.641 04:23:55 -- nvmf/common.sh@470 -- # waitforlisten 61568 00:08:52.641 04:23:55 -- common/autotest_common.sh@829 -- # '[' -z 61568 ']' 00:08:52.641 04:23:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.641 04:23:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.641 04:23:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.641 04:23:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.641 04:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:52.641 [2024-12-07 04:23:55.815080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:52.641 [2024-12-07 04:23:55.815163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.900 [2024-12-07 04:23:55.948467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.900 [2024-12-07 04:23:56.003137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.900 [2024-12-07 04:23:56.003280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.900 [2024-12-07 04:23:56.003293] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.900 [2024-12-07 04:23:56.003300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
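nvmf_veth_init above stitched together the test topology: one veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the host-side halves. A condensed recap using the same names and addresses as the trace (run as root; the second target interface carrying 10.0.0.3 is built the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge joins the host-side halves
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # target address reachable from the host side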
00:08:52.900 [2024-12-07 04:23:56.003943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.900 [2024-12-07 04:23:56.004112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.900 [2024-12-07 04:23:56.004267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.900 [2024-12-07 04:23:56.004273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.836 04:23:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.836 04:23:56 -- common/autotest_common.sh@862 -- # return 0 00:08:53.836 04:23:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:53.836 04:23:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 04:23:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 [2024-12-07 04:23:56.857784] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 Malloc0 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.836 04:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 04:23:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 [2024-12-07 04:23:56.918584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.836 04:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61603 00:08:53.836 04:23:56 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:53.836 04:23:56 -- nvmf/common.sh@520 -- # config=() 00:08:53.836 04:23:56 -- nvmf/common.sh@520 -- # local subsystem config 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@30 -- # READ_PID=61605 00:08:53.836 04:23:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:53.836 04:23:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:53.836 { 00:08:53.836 "params": { 00:08:53.836 "name": "Nvme$subsystem", 00:08:53.836 "trtype": "$TEST_TRANSPORT", 00:08:53.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.836 "adrfam": "ipv4", 00:08:53.836 "trsvcid": "$NVMF_PORT", 00:08:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.836 "hdgst": ${hdgst:-false}, 00:08:53.836 "ddgst": ${ddgst:-false} 00:08:53.836 }, 00:08:53.836 "method": "bdev_nvme_attach_controller" 00:08:53.836 } 00:08:53.836 EOF 00:08:53.836 )") 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61606 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:53.836 04:23:56 -- nvmf/common.sh@542 -- # cat 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61610 00:08:53.836 04:23:56 -- nvmf/common.sh@520 -- # config=() 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@35 -- # sync 00:08:53.836 04:23:56 -- nvmf/common.sh@520 -- # local subsystem config 00:08:53.836 04:23:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:53.836 04:23:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:53.836 { 00:08:53.836 "params": { 00:08:53.836 "name": "Nvme$subsystem", 00:08:53.836 "trtype": "$TEST_TRANSPORT", 00:08:53.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.836 "adrfam": "ipv4", 00:08:53.836 "trsvcid": "$NVMF_PORT", 00:08:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.836 "hdgst": ${hdgst:-false}, 00:08:53.836 "ddgst": ${ddgst:-false} 00:08:53.836 }, 00:08:53.836 "method": "bdev_nvme_attach_controller" 00:08:53.836 } 00:08:53.836 EOF 00:08:53.836 )") 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:53.836 04:23:56 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:53.836 04:23:56 -- nvmf/common.sh@520 -- # config=() 00:08:53.836 04:23:56 -- nvmf/common.sh@542 -- # cat 00:08:53.836 04:23:56 -- nvmf/common.sh@520 -- # local subsystem config 00:08:53.836 04:23:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:53.836 04:23:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:53.836 { 00:08:53.837 "params": { 00:08:53.837 "name": "Nvme$subsystem", 00:08:53.837 "trtype": "$TEST_TRANSPORT", 00:08:53.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.837 "adrfam": "ipv4", 00:08:53.837 "trsvcid": "$NVMF_PORT", 00:08:53.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:08:53.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.837 "hdgst": ${hdgst:-false}, 00:08:53.837 "ddgst": ${ddgst:-false} 00:08:53.837 }, 00:08:53.837 "method": "bdev_nvme_attach_controller" 00:08:53.837 } 00:08:53.837 EOF 00:08:53.837 )") 00:08:53.837 04:23:56 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:53.837 04:23:56 -- nvmf/common.sh@542 -- # cat 00:08:53.837 04:23:56 -- nvmf/common.sh@520 -- # config=() 00:08:53.837 04:23:56 -- nvmf/common.sh@520 -- # local subsystem config 00:08:53.837 04:23:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:53.837 04:23:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:53.837 { 00:08:53.837 "params": { 00:08:53.837 "name": "Nvme$subsystem", 00:08:53.837 "trtype": "$TEST_TRANSPORT", 00:08:53.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.837 "adrfam": "ipv4", 00:08:53.837 "trsvcid": "$NVMF_PORT", 00:08:53.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.837 "hdgst": ${hdgst:-false}, 00:08:53.837 "ddgst": ${ddgst:-false} 00:08:53.837 }, 00:08:53.837 "method": "bdev_nvme_attach_controller" 00:08:53.837 } 00:08:53.837 EOF 00:08:53.837 )") 00:08:53.837 04:23:56 -- nvmf/common.sh@544 -- # jq . 00:08:53.837 04:23:56 -- nvmf/common.sh@542 -- # cat 00:08:53.837 04:23:56 -- nvmf/common.sh@544 -- # jq . 00:08:53.837 04:23:56 -- nvmf/common.sh@545 -- # IFS=, 00:08:53.837 04:23:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:53.837 "params": { 00:08:53.837 "name": "Nvme1", 00:08:53.837 "trtype": "tcp", 00:08:53.837 "traddr": "10.0.0.2", 00:08:53.837 "adrfam": "ipv4", 00:08:53.837 "trsvcid": "4420", 00:08:53.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.837 "hdgst": false, 00:08:53.837 "ddgst": false 00:08:53.837 }, 00:08:53.837 "method": "bdev_nvme_attach_controller" 00:08:53.837 }' 00:08:53.837 04:23:56 -- nvmf/common.sh@544 -- # jq . 00:08:53.837 04:23:56 -- nvmf/common.sh@544 -- # jq . 
00:08:53.837 04:23:56 -- nvmf/common.sh@545 -- # IFS=, 00:08:53.837 04:23:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:53.837 "params": { 00:08:53.837 "name": "Nvme1", 00:08:53.837 "trtype": "tcp", 00:08:53.837 "traddr": "10.0.0.2", 00:08:53.837 "adrfam": "ipv4", 00:08:53.837 "trsvcid": "4420", 00:08:53.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.837 "hdgst": false, 00:08:53.837 "ddgst": false 00:08:53.837 }, 00:08:53.837 "method": "bdev_nvme_attach_controller" 00:08:53.837 }' 00:08:53.837 04:23:56 -- nvmf/common.sh@545 -- # IFS=, 00:08:53.837 04:23:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:53.837 "params": { 00:08:53.837 "name": "Nvme1", 00:08:53.837 "trtype": "tcp", 00:08:53.837 "traddr": "10.0.0.2", 00:08:53.837 "adrfam": "ipv4", 00:08:53.837 "trsvcid": "4420", 00:08:53.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.837 "hdgst": false, 00:08:53.837 "ddgst": false 00:08:53.837 }, 00:08:53.837 "method": "bdev_nvme_attach_controller" 00:08:53.837 }' 00:08:53.837 04:23:56 -- nvmf/common.sh@545 -- # IFS=, 00:08:53.837 04:23:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:53.837 "params": { 00:08:53.837 "name": "Nvme1", 00:08:53.837 "trtype": "tcp", 00:08:53.837 "traddr": "10.0.0.2", 00:08:53.837 "adrfam": "ipv4", 00:08:53.837 "trsvcid": "4420", 00:08:53.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.837 "hdgst": false, 00:08:53.837 "ddgst": false 00:08:53.837 }, 00:08:53.837 "method": "bdev_nvme_attach_controller" 00:08:53.837 }' 00:08:53.837 [2024-12-07 04:23:56.979368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.837 [2024-12-07 04:23:56.979471] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:53.837 [2024-12-07 04:23:56.979729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.837 [2024-12-07 04:23:56.979796] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:53.837 [2024-12-07 04:23:56.991283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.837 [2024-12-07 04:23:56.991970] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:53.837 04:23:57 -- target/bdev_io_wait.sh@37 -- # wait 61603 00:08:53.837 [2024-12-07 04:23:57.015168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
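Each bdevperf instance above reads its configuration from /dev/fd/63, which gen_nvmf_target_json fills with the bdev_nvme_attach_controller fragment printed in the trace. A sketch of a standalone equivalent, assuming the standard SPDK "subsystems"/"config" envelope around that fragment (only the fragment itself appears in the trace) and a hypothetical /tmp/nvme1.json path:

  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256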
00:08:53.837 [2024-12-07 04:23:57.015249] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:54.096 [2024-12-07 04:23:57.163863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.096 [2024-12-07 04:23:57.203861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.096 [2024-12-07 04:23:57.230502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:54.096 [2024-12-07 04:23:57.245216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.096 [2024-12-07 04:23:57.257616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.096 [2024-12-07 04:23:57.285906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.096 [2024-12-07 04:23:57.298770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:54.354 [2024-12-07 04:23:57.338323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:54.354 Running I/O for 1 seconds... 00:08:54.354 Running I/O for 1 seconds... 00:08:54.354 Running I/O for 1 seconds... 00:08:54.354 Running I/O for 1 seconds... 00:08:55.290 00:08:55.290 Latency(us) 00:08:55.290 [2024-12-07T04:23:58.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.290 [2024-12-07T04:23:58.530Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:55.290 Nvme1n1 : 1.01 10903.21 42.59 0.00 0.00 11697.33 6315.29 20614.05 00:08:55.290 [2024-12-07T04:23:58.530Z] =================================================================================================================== 00:08:55.290 [2024-12-07T04:23:58.530Z] Total : 10903.21 42.59 0.00 0.00 11697.33 6315.29 20614.05 00:08:55.290 00:08:55.290 Latency(us) 00:08:55.290 [2024-12-07T04:23:58.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.290 [2024-12-07T04:23:58.530Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:55.290 Nvme1n1 : 1.01 8285.24 32.36 0.00 0.00 15370.35 8519.68 27644.28 00:08:55.290 [2024-12-07T04:23:58.530Z] =================================================================================================================== 00:08:55.290 [2024-12-07T04:23:58.530Z] Total : 8285.24 32.36 0.00 0.00 15370.35 8519.68 27644.28 00:08:55.290 00:08:55.290 Latency(us) 00:08:55.290 [2024-12-07T04:23:58.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.290 [2024-12-07T04:23:58.530Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:55.290 Nvme1n1 : 1.00 171170.80 668.64 0.00 0.00 745.04 366.78 1236.25 00:08:55.290 [2024-12-07T04:23:58.530Z] =================================================================================================================== 00:08:55.290 [2024-12-07T04:23:58.530Z] Total : 171170.80 668.64 0.00 0.00 745.04 366.78 1236.25 00:08:55.290 00:08:55.290 Latency(us) 00:08:55.290 [2024-12-07T04:23:58.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.290 [2024-12-07T04:23:58.530Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:55.290 Nvme1n1 : 1.01 8051.88 31.45 0.00 0.00 15817.81 7119.59 23473.80 00:08:55.290 [2024-12-07T04:23:58.530Z] 
=================================================================================================================== 00:08:55.290 [2024-12-07T04:23:58.530Z] Total : 8051.88 31.45 0.00 0.00 15817.81 7119.59 23473.80 00:08:55.548 04:23:58 -- target/bdev_io_wait.sh@38 -- # wait 61605 00:08:55.548 04:23:58 -- target/bdev_io_wait.sh@39 -- # wait 61606 00:08:55.548 04:23:58 -- target/bdev_io_wait.sh@40 -- # wait 61610 00:08:55.548 04:23:58 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.548 04:23:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.548 04:23:58 -- common/autotest_common.sh@10 -- # set +x 00:08:55.548 04:23:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.548 04:23:58 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:55.548 04:23:58 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:55.548 04:23:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:55.548 04:23:58 -- nvmf/common.sh@116 -- # sync 00:08:55.548 04:23:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:55.548 04:23:58 -- nvmf/common.sh@119 -- # set +e 00:08:55.548 04:23:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:55.548 04:23:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:55.548 rmmod nvme_tcp 00:08:55.548 rmmod nvme_fabrics 00:08:55.548 rmmod nvme_keyring 00:08:55.548 04:23:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:55.548 04:23:58 -- nvmf/common.sh@123 -- # set -e 00:08:55.548 04:23:58 -- nvmf/common.sh@124 -- # return 0 00:08:55.548 04:23:58 -- nvmf/common.sh@477 -- # '[' -n 61568 ']' 00:08:55.549 04:23:58 -- nvmf/common.sh@478 -- # killprocess 61568 00:08:55.549 04:23:58 -- common/autotest_common.sh@936 -- # '[' -z 61568 ']' 00:08:55.549 04:23:58 -- common/autotest_common.sh@940 -- # kill -0 61568 00:08:55.549 04:23:58 -- common/autotest_common.sh@941 -- # uname 00:08:55.549 04:23:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:55.549 04:23:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61568 00:08:55.807 04:23:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:55.807 04:23:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:55.807 04:23:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61568' 00:08:55.807 killing process with pid 61568 00:08:55.807 04:23:58 -- common/autotest_common.sh@955 -- # kill 61568 00:08:55.807 04:23:58 -- common/autotest_common.sh@960 -- # wait 61568 00:08:55.807 04:23:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:55.807 04:23:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:55.807 04:23:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:55.807 04:23:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.807 04:23:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:55.807 04:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.807 04:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.807 04:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.807 04:23:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:55.807 00:08:55.807 real 0m3.787s 00:08:55.807 user 0m16.464s 00:08:55.807 sys 0m1.845s 00:08:55.807 04:23:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.807 ************************************ 00:08:55.807 END TEST nvmf_bdev_io_wait 00:08:55.807 ************************************ 00:08:55.807 
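The test that just finished ran four bdevperf instances in parallel, one workload each (write, read, flush, unmap) on separate core masks, all attached to the same cnode1 subsystem, then waited on each PID. Roughly equivalent shape, reusing the hypothetical JSON config sketched above:

  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
      read -r mask id workload <<< "$spec"
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m "$mask" -i "$id" \
          --json /tmp/nvme1.json -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
  done
  wait   # all four 1-second runs finish before the target is torn down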
04:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:55.807 04:23:59 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:55.807 04:23:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.807 04:23:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.807 04:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:56.067 ************************************ 00:08:56.067 START TEST nvmf_queue_depth 00:08:56.067 ************************************ 00:08:56.067 04:23:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:56.067 * Looking for test storage... 00:08:56.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.067 04:23:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:56.067 04:23:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:56.067 04:23:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:56.067 04:23:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:56.067 04:23:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:56.067 04:23:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:56.067 04:23:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:56.067 04:23:59 -- scripts/common.sh@335 -- # IFS=.-: 00:08:56.067 04:23:59 -- scripts/common.sh@335 -- # read -ra ver1 00:08:56.067 04:23:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.067 04:23:59 -- scripts/common.sh@336 -- # read -ra ver2 00:08:56.067 04:23:59 -- scripts/common.sh@337 -- # local 'op=<' 00:08:56.067 04:23:59 -- scripts/common.sh@339 -- # ver1_l=2 00:08:56.067 04:23:59 -- scripts/common.sh@340 -- # ver2_l=1 00:08:56.067 04:23:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:56.067 04:23:59 -- scripts/common.sh@343 -- # case "$op" in 00:08:56.067 04:23:59 -- scripts/common.sh@344 -- # : 1 00:08:56.067 04:23:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:56.067 04:23:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.067 04:23:59 -- scripts/common.sh@364 -- # decimal 1 00:08:56.067 04:23:59 -- scripts/common.sh@352 -- # local d=1 00:08:56.067 04:23:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.067 04:23:59 -- scripts/common.sh@354 -- # echo 1 00:08:56.067 04:23:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:56.067 04:23:59 -- scripts/common.sh@365 -- # decimal 2 00:08:56.067 04:23:59 -- scripts/common.sh@352 -- # local d=2 00:08:56.067 04:23:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.067 04:23:59 -- scripts/common.sh@354 -- # echo 2 00:08:56.067 04:23:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:56.067 04:23:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:56.067 04:23:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:56.067 04:23:59 -- scripts/common.sh@367 -- # return 0 00:08:56.067 04:23:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.067 04:23:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.067 --rc genhtml_branch_coverage=1 00:08:56.067 --rc genhtml_function_coverage=1 00:08:56.067 --rc genhtml_legend=1 00:08:56.067 --rc geninfo_all_blocks=1 00:08:56.067 --rc geninfo_unexecuted_blocks=1 00:08:56.067 00:08:56.067 ' 00:08:56.067 04:23:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.067 --rc genhtml_branch_coverage=1 00:08:56.067 --rc genhtml_function_coverage=1 00:08:56.067 --rc genhtml_legend=1 00:08:56.067 --rc geninfo_all_blocks=1 00:08:56.067 --rc geninfo_unexecuted_blocks=1 00:08:56.067 00:08:56.067 ' 00:08:56.067 04:23:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.067 --rc genhtml_branch_coverage=1 00:08:56.067 --rc genhtml_function_coverage=1 00:08:56.067 --rc genhtml_legend=1 00:08:56.067 --rc geninfo_all_blocks=1 00:08:56.067 --rc geninfo_unexecuted_blocks=1 00:08:56.067 00:08:56.067 ' 00:08:56.067 04:23:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.067 --rc genhtml_branch_coverage=1 00:08:56.067 --rc genhtml_function_coverage=1 00:08:56.067 --rc genhtml_legend=1 00:08:56.067 --rc geninfo_all_blocks=1 00:08:56.067 --rc geninfo_unexecuted_blocks=1 00:08:56.067 00:08:56.067 ' 00:08:56.067 04:23:59 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.067 04:23:59 -- nvmf/common.sh@7 -- # uname -s 00:08:56.067 04:23:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.067 04:23:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.067 04:23:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.067 04:23:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.067 04:23:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.067 04:23:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.067 04:23:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.067 04:23:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.067 04:23:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.067 04:23:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.067 04:23:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 
00:08:56.067 04:23:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:08:56.067 04:23:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.067 04:23:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.067 04:23:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.067 04:23:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.067 04:23:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.067 04:23:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.067 04:23:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.067 04:23:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.067 04:23:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.067 04:23:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.067 04:23:59 -- paths/export.sh@5 -- # export PATH 00:08:56.067 04:23:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.067 04:23:59 -- nvmf/common.sh@46 -- # : 0 00:08:56.067 04:23:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:56.067 04:23:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:56.067 04:23:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:56.067 04:23:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.067 04:23:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.067 04:23:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:56.067 04:23:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:56.067 04:23:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:56.067 04:23:59 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:56.067 04:23:59 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:56.067 04:23:59 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:56.067 04:23:59 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:56.067 04:23:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:56.067 04:23:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.067 04:23:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:56.067 04:23:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:56.067 04:23:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:56.067 04:23:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.067 04:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.067 04:23:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.067 04:23:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:56.067 04:23:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:56.067 04:23:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:56.067 04:23:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:56.067 04:23:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:56.067 04:23:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:56.067 04:23:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.067 04:23:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.067 04:23:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:56.067 04:23:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:56.068 04:23:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.068 04:23:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.068 04:23:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.068 04:23:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.068 04:23:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.068 04:23:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.068 04:23:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.068 04:23:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.068 04:23:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:56.068 04:23:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:56.068 Cannot find device "nvmf_tgt_br" 00:08:56.068 04:23:59 -- nvmf/common.sh@154 -- # true 00:08:56.068 04:23:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.068 Cannot find device "nvmf_tgt_br2" 00:08:56.068 04:23:59 -- nvmf/common.sh@155 -- # true 00:08:56.068 04:23:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:56.068 04:23:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:56.068 Cannot find device "nvmf_tgt_br" 00:08:56.068 04:23:59 -- nvmf/common.sh@157 -- # true 00:08:56.068 04:23:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:56.068 Cannot find device "nvmf_tgt_br2" 00:08:56.068 04:23:59 -- nvmf/common.sh@158 -- # true 00:08:56.068 04:23:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:56.326 04:23:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:56.326 04:23:59 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.326 04:23:59 -- nvmf/common.sh@161 -- # true 00:08:56.326 04:23:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.326 04:23:59 -- nvmf/common.sh@162 -- # true 00:08:56.326 04:23:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.326 04:23:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.326 04:23:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.326 04:23:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.326 04:23:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.326 04:23:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.326 04:23:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.326 04:23:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:56.326 04:23:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:56.326 04:23:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:56.326 04:23:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:56.326 04:23:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:56.326 04:23:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:56.326 04:23:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.326 04:23:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.326 04:23:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.326 04:23:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:56.326 04:23:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:56.326 04:23:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.326 04:23:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.326 04:23:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:56.326 04:23:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:56.326 04:23:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:56.326 04:23:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:56.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:08:56.326 00:08:56.326 --- 10.0.0.2 ping statistics --- 00:08:56.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.326 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:56.326 04:23:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:56.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:56.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:08:56.326 00:08:56.326 --- 10.0.0.3 ping statistics --- 00:08:56.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.326 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:56.326 04:23:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:56.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:56.326 00:08:56.326 --- 10.0.0.1 ping statistics --- 00:08:56.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.326 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:56.326 04:23:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.326 04:23:59 -- nvmf/common.sh@421 -- # return 0 00:08:56.326 04:23:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:56.326 04:23:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.326 04:23:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:56.326 04:23:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:56.326 04:23:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.326 04:23:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:56.326 04:23:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:56.326 04:23:59 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:56.326 04:23:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:56.326 04:23:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.326 04:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:56.326 04:23:59 -- nvmf/common.sh@469 -- # nvmfpid=61815 00:08:56.326 04:23:59 -- nvmf/common.sh@470 -- # waitforlisten 61815 00:08:56.326 04:23:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:56.326 04:23:59 -- common/autotest_common.sh@829 -- # '[' -z 61815 ']' 00:08:56.327 04:23:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.327 04:23:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.585 04:23:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.585 04:23:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.585 04:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:56.585 [2024-12-07 04:23:59.612953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:56.585 [2024-12-07 04:23:59.613068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.585 [2024-12-07 04:23:59.749366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.585 [2024-12-07 04:23:59.799364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.585 [2024-12-07 04:23:59.799529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.585 [2024-12-07 04:23:59.799541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:56.585 [2024-12-07 04:23:59.799549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.585 [2024-12-07 04:23:59.799572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.522 04:24:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.522 04:24:00 -- common/autotest_common.sh@862 -- # return 0 00:08:57.522 04:24:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:57.522 04:24:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 04:24:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.522 04:24:00 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.522 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 [2024-12-07 04:24:00.624125] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.522 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.522 04:24:00 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.522 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 Malloc0 00:08:57.522 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.522 04:24:00 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.522 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.522 04:24:00 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.522 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.522 04:24:00 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.522 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 [2024-12-07 04:24:00.676743] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.522 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.522 04:24:00 -- target/queue_depth.sh@30 -- # bdevperf_pid=61857 00:08:57.522 04:24:00 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:57.522 04:24:00 -- target/queue_depth.sh@33 -- # waitforlisten 61857 /var/tmp/bdevperf.sock 00:08:57.522 04:24:00 -- common/autotest_common.sh@829 -- # '[' -z 61857 ']' 00:08:57.522 04:24:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:57.522 04:24:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:57.522 04:24:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
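Outside the test harness, the rpc_cmd calls above correspond to plain scripts/rpc.py invocations against the target's default /var/tmp/spdk.sock. The sketch below mirrors this log's arguments (paths shortened to be relative to the SPDK checkout) and is meant as orientation, not as an additional configuration step.

  # target was started above as:
  #   ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the bdevperf instance being brought up here (its full command line appears a few
  # trace lines below, with -q 1024, hence the test name) then attaches to that
  # listener and runs the 10-second verify workload
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests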
00:08:57.522 04:24:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.522 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.522 04:24:00 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:57.522 [2024-12-07 04:24:00.739139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:57.522 [2024-12-07 04:24:00.739253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61857 ] 00:08:57.781 [2024-12-07 04:24:00.879340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.781 [2024-12-07 04:24:00.948575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.717 04:24:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.717 04:24:01 -- common/autotest_common.sh@862 -- # return 0 00:08:58.717 04:24:01 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:58.717 04:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.717 04:24:01 -- common/autotest_common.sh@10 -- # set +x 00:08:58.717 NVMe0n1 00:08:58.717 04:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.717 04:24:01 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.717 Running I/O for 10 seconds... 00:09:10.941 00:09:10.941 Latency(us) 00:09:10.941 [2024-12-07T04:24:14.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.941 [2024-12-07T04:24:14.181Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:10.941 Verification LBA range: start 0x0 length 0x4000 00:09:10.941 NVMe0n1 : 10.06 15045.96 58.77 0.00 0.00 67802.47 13702.98 56956.74 00:09:10.941 [2024-12-07T04:24:14.181Z] =================================================================================================================== 00:09:10.941 [2024-12-07T04:24:14.181Z] Total : 15045.96 58.77 0.00 0.00 67802.47 13702.98 56956.74 00:09:10.941 0 00:09:10.941 04:24:11 -- target/queue_depth.sh@39 -- # killprocess 61857 00:09:10.941 04:24:11 -- common/autotest_common.sh@936 -- # '[' -z 61857 ']' 00:09:10.941 04:24:11 -- common/autotest_common.sh@940 -- # kill -0 61857 00:09:10.941 04:24:11 -- common/autotest_common.sh@941 -- # uname 00:09:10.942 04:24:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:10.942 04:24:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61857 00:09:10.942 killing process with pid 61857 00:09:10.942 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.942 00:09:10.942 Latency(us) 00:09:10.942 [2024-12-07T04:24:14.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.942 [2024-12-07T04:24:14.182Z] =================================================================================================================== 00:09:10.942 [2024-12-07T04:24:14.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.942 04:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:10.942 04:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:10.942 04:24:12 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 61857' 00:09:10.942 04:24:12 -- common/autotest_common.sh@955 -- # kill 61857 00:09:10.942 04:24:12 -- common/autotest_common.sh@960 -- # wait 61857 00:09:10.942 04:24:12 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:10.942 04:24:12 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:10.942 04:24:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:10.942 04:24:12 -- nvmf/common.sh@116 -- # sync 00:09:10.942 04:24:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:10.942 04:24:12 -- nvmf/common.sh@119 -- # set +e 00:09:10.942 04:24:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:10.942 04:24:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:10.942 rmmod nvme_tcp 00:09:10.942 rmmod nvme_fabrics 00:09:10.942 rmmod nvme_keyring 00:09:10.942 04:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:10.942 04:24:12 -- nvmf/common.sh@123 -- # set -e 00:09:10.942 04:24:12 -- nvmf/common.sh@124 -- # return 0 00:09:10.942 04:24:12 -- nvmf/common.sh@477 -- # '[' -n 61815 ']' 00:09:10.942 04:24:12 -- nvmf/common.sh@478 -- # killprocess 61815 00:09:10.942 04:24:12 -- common/autotest_common.sh@936 -- # '[' -z 61815 ']' 00:09:10.942 04:24:12 -- common/autotest_common.sh@940 -- # kill -0 61815 00:09:10.942 04:24:12 -- common/autotest_common.sh@941 -- # uname 00:09:10.942 04:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:10.942 04:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61815 00:09:10.942 killing process with pid 61815 00:09:10.942 04:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:10.942 04:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:10.942 04:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61815' 00:09:10.942 04:24:12 -- common/autotest_common.sh@955 -- # kill 61815 00:09:10.942 04:24:12 -- common/autotest_common.sh@960 -- # wait 61815 00:09:10.942 04:24:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:10.942 04:24:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:10.942 04:24:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:10.942 04:24:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.942 04:24:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:10.942 04:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.942 04:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.942 04:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.942 04:24:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:10.942 00:09:10.942 real 0m13.506s 00:09:10.942 user 0m23.752s 00:09:10.942 sys 0m1.827s 00:09:10.942 04:24:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.942 ************************************ 00:09:10.942 END TEST nvmf_queue_depth 00:09:10.942 ************************************ 00:09:10.942 04:24:12 -- common/autotest_common.sh@10 -- # set +x 00:09:10.942 04:24:12 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:10.942 04:24:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:10.942 04:24:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.942 04:24:12 -- common/autotest_common.sh@10 -- # set +x 00:09:10.942 ************************************ 00:09:10.942 START TEST nvmf_multipath 00:09:10.942 
************************************ 00:09:10.942 04:24:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:10.942 * Looking for test storage... 00:09:10.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:10.942 04:24:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:10.942 04:24:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:10.942 04:24:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:10.942 04:24:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:10.942 04:24:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:10.942 04:24:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:10.942 04:24:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:10.942 04:24:12 -- scripts/common.sh@335 -- # IFS=.-: 00:09:10.942 04:24:12 -- scripts/common.sh@335 -- # read -ra ver1 00:09:10.942 04:24:12 -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.942 04:24:12 -- scripts/common.sh@336 -- # read -ra ver2 00:09:10.942 04:24:12 -- scripts/common.sh@337 -- # local 'op=<' 00:09:10.942 04:24:12 -- scripts/common.sh@339 -- # ver1_l=2 00:09:10.942 04:24:12 -- scripts/common.sh@340 -- # ver2_l=1 00:09:10.942 04:24:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:10.942 04:24:12 -- scripts/common.sh@343 -- # case "$op" in 00:09:10.942 04:24:12 -- scripts/common.sh@344 -- # : 1 00:09:10.942 04:24:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:10.942 04:24:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.942 04:24:12 -- scripts/common.sh@364 -- # decimal 1 00:09:10.942 04:24:12 -- scripts/common.sh@352 -- # local d=1 00:09:10.942 04:24:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.942 04:24:12 -- scripts/common.sh@354 -- # echo 1 00:09:10.942 04:24:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:10.942 04:24:12 -- scripts/common.sh@365 -- # decimal 2 00:09:10.942 04:24:12 -- scripts/common.sh@352 -- # local d=2 00:09:10.942 04:24:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.942 04:24:12 -- scripts/common.sh@354 -- # echo 2 00:09:10.942 04:24:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:10.942 04:24:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:10.942 04:24:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:10.942 04:24:12 -- scripts/common.sh@367 -- # return 0 00:09:10.942 04:24:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.942 04:24:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:10.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.942 --rc genhtml_branch_coverage=1 00:09:10.942 --rc genhtml_function_coverage=1 00:09:10.942 --rc genhtml_legend=1 00:09:10.942 --rc geninfo_all_blocks=1 00:09:10.942 --rc geninfo_unexecuted_blocks=1 00:09:10.942 00:09:10.942 ' 00:09:10.942 04:24:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:10.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.942 --rc genhtml_branch_coverage=1 00:09:10.942 --rc genhtml_function_coverage=1 00:09:10.942 --rc genhtml_legend=1 00:09:10.942 --rc geninfo_all_blocks=1 00:09:10.942 --rc geninfo_unexecuted_blocks=1 00:09:10.942 00:09:10.942 ' 00:09:10.942 04:24:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:10.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.942 --rc 
genhtml_branch_coverage=1 00:09:10.942 --rc genhtml_function_coverage=1 00:09:10.942 --rc genhtml_legend=1 00:09:10.942 --rc geninfo_all_blocks=1 00:09:10.942 --rc geninfo_unexecuted_blocks=1 00:09:10.942 00:09:10.942 ' 00:09:10.942 04:24:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:10.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.942 --rc genhtml_branch_coverage=1 00:09:10.942 --rc genhtml_function_coverage=1 00:09:10.942 --rc genhtml_legend=1 00:09:10.942 --rc geninfo_all_blocks=1 00:09:10.942 --rc geninfo_unexecuted_blocks=1 00:09:10.942 00:09:10.942 ' 00:09:10.942 04:24:12 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:10.942 04:24:12 -- nvmf/common.sh@7 -- # uname -s 00:09:10.942 04:24:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.942 04:24:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.942 04:24:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.942 04:24:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.942 04:24:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.942 04:24:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.942 04:24:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.942 04:24:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.942 04:24:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.942 04:24:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.942 04:24:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:10.942 04:24:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:10.942 04:24:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.942 04:24:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.942 04:24:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:10.942 04:24:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.942 04:24:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.942 04:24:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.942 04:24:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.942 04:24:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.943 04:24:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.943 04:24:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.943 04:24:12 -- paths/export.sh@5 -- # export PATH 00:09:10.943 04:24:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.943 04:24:12 -- nvmf/common.sh@46 -- # : 0 00:09:10.943 04:24:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:10.943 04:24:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:10.943 04:24:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:10.943 04:24:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.943 04:24:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.943 04:24:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:10.943 04:24:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:10.943 04:24:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:10.943 04:24:12 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.943 04:24:12 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.943 04:24:12 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:10.943 04:24:12 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.943 04:24:12 -- target/multipath.sh@43 -- # nvmftestinit 00:09:10.943 04:24:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:10.943 04:24:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.943 04:24:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:10.943 04:24:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:10.943 04:24:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:10.943 04:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.943 04:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.943 04:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.943 04:24:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:10.943 04:24:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:10.943 04:24:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:10.943 04:24:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:10.943 04:24:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:10.943 04:24:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:10.943 04:24:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.943 04:24:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.943 04:24:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:10.943 04:24:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:10.943 04:24:12 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:10.943 04:24:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:10.943 04:24:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:10.943 04:24:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.943 04:24:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:10.943 04:24:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:10.943 04:24:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:10.943 04:24:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:10.943 04:24:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:10.943 04:24:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:10.943 Cannot find device "nvmf_tgt_br" 00:09:10.943 04:24:12 -- nvmf/common.sh@154 -- # true 00:09:10.943 04:24:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:10.943 Cannot find device "nvmf_tgt_br2" 00:09:10.943 04:24:12 -- nvmf/common.sh@155 -- # true 00:09:10.943 04:24:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:10.943 04:24:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:10.943 Cannot find device "nvmf_tgt_br" 00:09:10.943 04:24:12 -- nvmf/common.sh@157 -- # true 00:09:10.943 04:24:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:10.943 Cannot find device "nvmf_tgt_br2" 00:09:10.943 04:24:12 -- nvmf/common.sh@158 -- # true 00:09:10.943 04:24:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:10.943 04:24:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:10.943 04:24:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.943 04:24:12 -- nvmf/common.sh@161 -- # true 00:09:10.943 04:24:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.943 04:24:12 -- nvmf/common.sh@162 -- # true 00:09:10.943 04:24:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.943 04:24:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.943 04:24:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.943 04:24:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.943 04:24:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.943 04:24:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.943 04:24:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.943 04:24:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:10.943 04:24:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:10.943 04:24:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:10.943 04:24:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:10.943 04:24:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:10.943 04:24:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:10.943 04:24:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:09:10.943 04:24:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.943 04:24:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.943 04:24:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:10.943 04:24:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:10.943 04:24:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.943 04:24:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.943 04:24:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.943 04:24:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.943 04:24:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.943 04:24:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:10.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:09:10.943 00:09:10.943 --- 10.0.0.2 ping statistics --- 00:09:10.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.943 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:10.943 04:24:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:10.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:10.943 00:09:10.943 --- 10.0.0.3 ping statistics --- 00:09:10.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.943 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:10.943 04:24:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:10.943 00:09:10.943 --- 10.0.0.1 ping statistics --- 00:09:10.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.943 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:10.943 04:24:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.943 04:24:13 -- nvmf/common.sh@421 -- # return 0 00:09:10.943 04:24:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:10.943 04:24:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.943 04:24:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:10.943 04:24:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:10.943 04:24:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.943 04:24:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:10.943 04:24:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:10.943 04:24:13 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:10.943 04:24:13 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:10.943 04:24:13 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:10.943 04:24:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:10.943 04:24:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.943 04:24:13 -- common/autotest_common.sh@10 -- # set +x 00:09:10.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
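The second nvmf_veth_init pass above is identical in shape to the first; what matters for multipath is that both target-side addresses are now reachable, because the test publishes one subsystem on two TCP listeners and connects to it twice from the host. Roughly, using the arguments that appear later in this trace (generated host NQN abbreviated):

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<gen-hostnqn> -g -G
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 --hostnqn=<gen-hostnqn> -g -G

Both connects end up as controllers of the same namespace, which is what the nvme0c0n1/nvme0c1n1 paths checked later in the trace refer to.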
00:09:10.943 04:24:13 -- nvmf/common.sh@469 -- # nvmfpid=62183 00:09:10.943 04:24:13 -- nvmf/common.sh@470 -- # waitforlisten 62183 00:09:10.943 04:24:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.943 04:24:13 -- common/autotest_common.sh@829 -- # '[' -z 62183 ']' 00:09:10.943 04:24:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.943 04:24:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.943 04:24:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.943 04:24:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.943 04:24:13 -- common/autotest_common.sh@10 -- # set +x 00:09:10.943 [2024-12-07 04:24:13.251455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:10.943 [2024-12-07 04:24:13.251855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.944 [2024-12-07 04:24:13.393258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.944 [2024-12-07 04:24:13.465553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.944 [2024-12-07 04:24:13.465968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.944 [2024-12-07 04:24:13.466125] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.944 [2024-12-07 04:24:13.466274] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
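Much of what follows is the target flipping each listener's ANA state and the host confirming that the new state shows up in sysfs before fio traffic is steered accordingly. Stripped of the harness's retry loops, one round of that handshake looks roughly like this (commands and paths as used in the trace below):

  # target side: degrade the 10.0.0.2 path and leave 10.0.0.3 usable
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  # host side: each controller of the shared namespace exposes its own ANA state
  cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized

The two fio runs further down repeat this dance under the "numa" and "round-robin" I/O policy values that the test echoes before starting each run.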
00:09:10.944 [2024-12-07 04:24:13.466686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.944 [2024-12-07 04:24:13.466753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.944 [2024-12-07 04:24:13.466887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.944 [2024-12-07 04:24:13.466896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.202 04:24:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.202 04:24:14 -- common/autotest_common.sh@862 -- # return 0 00:09:11.202 04:24:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:11.202 04:24:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.202 04:24:14 -- common/autotest_common.sh@10 -- # set +x 00:09:11.202 04:24:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.202 04:24:14 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.461 [2024-12-07 04:24:14.582178] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.461 04:24:14 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:11.720 Malloc0 00:09:11.720 04:24:14 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:11.978 04:24:15 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.237 04:24:15 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.496 [2024-12-07 04:24:15.572026] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.496 04:24:15 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:12.754 [2024-12-07 04:24:15.860308] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.754 04:24:15 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:13.013 04:24:16 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:13.013 04:24:16 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.013 04:24:16 -- common/autotest_common.sh@1187 -- # local i=0 00:09:13.013 04:24:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.013 04:24:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:13.013 04:24:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:14.916 04:24:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:15.174 04:24:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:15.174 04:24:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.174 04:24:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:15.174 04:24:18 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.174 04:24:18 -- common/autotest_common.sh@1197 -- # return 0 00:09:15.174 04:24:18 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:15.174 04:24:18 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:15.175 04:24:18 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:15.175 04:24:18 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:15.175 04:24:18 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:15.175 04:24:18 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:15.175 04:24:18 -- target/multipath.sh@38 -- # return 0 00:09:15.175 04:24:18 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:15.175 04:24:18 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:15.175 04:24:18 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:15.175 04:24:18 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:15.175 04:24:18 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:15.175 04:24:18 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:15.175 04:24:18 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:15.175 04:24:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:15.175 04:24:18 -- target/multipath.sh@22 -- # local timeout=20 00:09:15.175 04:24:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.175 04:24:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.175 04:24:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.175 04:24:18 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:15.175 04:24:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:15.175 04:24:18 -- target/multipath.sh@22 -- # local timeout=20 00:09:15.175 04:24:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.175 04:24:18 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.175 04:24:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.175 04:24:18 -- target/multipath.sh@85 -- # echo numa 00:09:15.175 04:24:18 -- target/multipath.sh@88 -- # fio_pid=62278 00:09:15.175 04:24:18 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:15.175 04:24:18 -- target/multipath.sh@90 -- # sleep 1 00:09:15.175 [global] 00:09:15.175 thread=1 00:09:15.175 invalidate=1 00:09:15.175 rw=randrw 00:09:15.175 time_based=1 00:09:15.175 runtime=6 00:09:15.175 ioengine=libaio 00:09:15.175 direct=1 00:09:15.175 bs=4096 00:09:15.175 iodepth=128 00:09:15.175 norandommap=0 00:09:15.175 numjobs=1 00:09:15.175 00:09:15.175 verify_dump=1 00:09:15.175 verify_backlog=512 00:09:15.175 verify_state_save=0 00:09:15.175 do_verify=1 00:09:15.175 verify=crc32c-intel 00:09:15.175 [job0] 00:09:15.175 filename=/dev/nvme0n1 00:09:15.175 Could not set queue depth (nvme0n1) 00:09:15.175 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.175 fio-3.35 00:09:15.175 Starting 1 thread 00:09:16.108 04:24:19 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:16.366 04:24:19 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:16.624 04:24:19 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:16.624 04:24:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:16.624 04:24:19 -- target/multipath.sh@22 -- # local timeout=20 00:09:16.624 04:24:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:16.624 04:24:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:16.624 04:24:19 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:16.624 04:24:19 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:16.624 04:24:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:16.624 04:24:19 -- target/multipath.sh@22 -- # local timeout=20 00:09:16.624 04:24:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:16.624 04:24:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:16.624 04:24:19 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:16.624 04:24:19 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:16.882 04:24:20 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:17.143 04:24:20 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:17.143 04:24:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:17.143 04:24:20 -- target/multipath.sh@22 -- # local timeout=20 00:09:17.143 04:24:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.143 04:24:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.143 04:24:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.143 04:24:20 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:17.143 04:24:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:17.143 04:24:20 -- target/multipath.sh@22 -- # local timeout=20 00:09:17.143 04:24:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.143 04:24:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.143 04:24:20 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.143 04:24:20 -- target/multipath.sh@104 -- # wait 62278 00:09:21.339 00:09:21.339 job0: (groupid=0, jobs=1): err= 0: pid=62299: Sat Dec 7 04:24:24 2024 00:09:21.340 read: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(258MiB/6006msec) 00:09:21.340 slat (usec): min=3, max=5701, avg=52.22, stdev=217.77 00:09:21.340 clat (usec): min=961, max=14413, avg=7780.53, stdev=1326.90 00:09:21.340 lat (usec): min=998, max=14422, avg=7832.75, stdev=1332.19 00:09:21.340 clat percentiles (usec): 00:09:21.340 | 1.00th=[ 4293], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 6980], 00:09:21.340 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 7898], 00:09:21.340 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10683], 00:09:21.340 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12911], 99.95th=[13173], 00:09:21.340 | 99.99th=[13698] 00:09:21.340 bw ( KiB/s): min=10000, max=28104, per=53.03%, avg=23338.18, stdev=6366.64, samples=11 00:09:21.340 iops : min= 2500, max= 7026, avg=5834.55, stdev=1591.66, samples=11 00:09:21.340 write: IOPS=6695, BW=26.2MiB/s (27.4MB/s)(140MiB/5352msec); 0 zone resets 00:09:21.340 slat (usec): min=14, max=2051, avg=61.92, stdev=150.69 00:09:21.340 clat (usec): min=901, max=13538, avg=6912.69, stdev=1165.84 00:09:21.340 lat (usec): min=974, max=13568, avg=6974.61, stdev=1169.94 00:09:21.340 clat percentiles (usec): 00:09:21.340 | 1.00th=[ 3326], 5.00th=[ 4228], 10.00th=[ 5538], 20.00th=[ 6390], 00:09:21.340 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:09:21.340 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8160], 00:09:21.340 | 99.00th=[10421], 99.50th=[10945], 99.90th=[12125], 99.95th=[12518], 00:09:21.340 | 99.99th=[12780] 00:09:21.340 bw ( KiB/s): min=10344, max=28224, per=87.24%, avg=23364.36, stdev=6197.66, samples=11 00:09:21.340 iops : min= 2586, max= 7056, avg=5841.09, stdev=1549.41, samples=11 00:09:21.340 lat (usec) : 1000=0.01% 00:09:21.340 lat (msec) : 2=0.03%, 4=1.67%, 10=93.73%, 20=4.57% 00:09:21.340 cpu : usr=5.73%, sys=21.85%, ctx=5856, majf=0, minf=108 00:09:21.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:21.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.340 issued rwts: total=66083,35833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.340 00:09:21.340 Run status group 0 (all jobs): 00:09:21.340 READ: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=258MiB (271MB), run=6006-6006msec 00:09:21.340 WRITE: bw=26.2MiB/s (27.4MB/s), 26.2MiB/s-26.2MiB/s (27.4MB/s-27.4MB/s), io=140MiB (147MB), run=5352-5352msec 00:09:21.340 00:09:21.340 Disk stats (read/write): 00:09:21.340 
nvme0n1: ios=65393/34940, merge=0/0, ticks=486153/225938, in_queue=712091, util=98.65% 00:09:21.340 04:24:24 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:21.599 04:24:24 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:22.166 04:24:25 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:22.166 04:24:25 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:22.166 04:24:25 -- target/multipath.sh@22 -- # local timeout=20 00:09:22.166 04:24:25 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.166 04:24:25 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.166 04:24:25 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:22.166 04:24:25 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:22.166 04:24:25 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:22.166 04:24:25 -- target/multipath.sh@22 -- # local timeout=20 00:09:22.166 04:24:25 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.166 04:24:25 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.166 04:24:25 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:22.166 04:24:25 -- target/multipath.sh@113 -- # echo round-robin 00:09:22.166 04:24:25 -- target/multipath.sh@116 -- # fio_pid=62381 00:09:22.166 04:24:25 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:22.166 04:24:25 -- target/multipath.sh@118 -- # sleep 1 00:09:22.166 [global] 00:09:22.166 thread=1 00:09:22.166 invalidate=1 00:09:22.166 rw=randrw 00:09:22.166 time_based=1 00:09:22.166 runtime=6 00:09:22.166 ioengine=libaio 00:09:22.166 direct=1 00:09:22.166 bs=4096 00:09:22.166 iodepth=128 00:09:22.166 norandommap=0 00:09:22.166 numjobs=1 00:09:22.166 00:09:22.166 verify_dump=1 00:09:22.166 verify_backlog=512 00:09:22.166 verify_state_save=0 00:09:22.166 do_verify=1 00:09:22.166 verify=crc32c-intel 00:09:22.166 [job0] 00:09:22.166 filename=/dev/nvme0n1 00:09:22.166 Could not set queue depth (nvme0n1) 00:09:22.166 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.166 fio-3.35 00:09:22.166 Starting 1 thread 00:09:23.100 04:24:26 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:23.359 04:24:26 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:23.618 04:24:26 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:23.618 04:24:26 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:23.618 04:24:26 -- target/multipath.sh@22 -- # local timeout=20 00:09:23.618 04:24:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.618 04:24:26 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.618 04:24:26 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:23.618 04:24:26 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:23.618 04:24:26 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:23.618 04:24:26 -- target/multipath.sh@22 -- # local timeout=20 00:09:23.618 04:24:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.618 04:24:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.618 04:24:26 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:23.618 04:24:26 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:23.877 04:24:26 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:24.136 04:24:27 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:24.136 04:24:27 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:24.136 04:24:27 -- target/multipath.sh@22 -- # local timeout=20 00:09:24.136 04:24:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:24.136 04:24:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:24.136 04:24:27 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:24.136 04:24:27 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:24.136 04:24:27 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:24.136 04:24:27 -- target/multipath.sh@22 -- # local timeout=20 00:09:24.136 04:24:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:24.136 04:24:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:24.136 04:24:27 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:24.136 04:24:27 -- target/multipath.sh@132 -- # wait 62381 00:09:28.328 00:09:28.328 job0: (groupid=0, jobs=1): err= 0: pid=62402: Sat Dec 7 04:24:31 2024 00:09:28.328 read: IOPS=12.4k, BW=48.4MiB/s (50.7MB/s)(290MiB/6002msec) 00:09:28.328 slat (usec): min=7, max=6070, avg=40.45, stdev=189.83 00:09:28.328 clat (usec): min=1168, max=13960, avg=7134.71, stdev=1646.60 00:09:28.328 lat (usec): min=1191, max=13968, avg=7175.17, stdev=1660.69 00:09:28.328 clat percentiles (usec): 00:09:28.328 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 5669], 00:09:28.328 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7308], 60.00th=[ 7570], 00:09:28.328 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9503], 00:09:28.328 | 99.00th=[11863], 99.50th=[12256], 99.90th=[12911], 99.95th=[13042], 00:09:28.328 | 99.99th=[13566] 00:09:28.328 bw ( KiB/s): min=12408, max=40440, per=52.91%, avg=26209.64, stdev=8626.39, samples=11 00:09:28.328 iops : min= 3102, max=10110, avg=6552.36, stdev=2156.60, samples=11 00:09:28.328 write: IOPS=7308, BW=28.5MiB/s (29.9MB/s)(149MiB/5218msec); 0 zone resets 00:09:28.328 slat (usec): min=14, max=2012, avg=51.39, stdev=127.20 00:09:28.329 clat (usec): min=1609, max=13588, avg=6074.53, stdev=1669.25 00:09:28.329 lat (usec): min=1690, max=13617, avg=6125.93, stdev=1683.57 00:09:28.329 clat percentiles (usec): 00:09:28.329 | 1.00th=[ 2638], 5.00th=[ 3228], 10.00th=[ 3621], 20.00th=[ 4228], 00:09:28.329 | 30.00th=[ 4948], 40.00th=[ 6128], 50.00th=[ 6652], 60.00th=[ 6980], 00:09:28.329 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8029], 00:09:28.329 | 99.00th=[ 9896], 99.50th=[10945], 99.90th=[11994], 99.95th=[12518], 00:09:28.329 | 99.99th=[13304] 00:09:28.329 bw ( KiB/s): min=12664, max=40792, per=89.56%, avg=26180.36, stdev=8468.01, samples=11 00:09:28.329 iops : min= 3166, max=10198, avg=6545.09, stdev=2117.00, samples=11 00:09:28.329 lat (msec) : 2=0.13%, 4=7.42%, 10=89.10%, 20=3.35% 00:09:28.329 cpu : usr=5.78%, sys=23.55%, ctx=6023, majf=0, minf=90 00:09:28.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:28.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.329 issued rwts: total=74326,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.329 00:09:28.329 Run status group 0 (all jobs): 00:09:28.329 READ: bw=48.4MiB/s (50.7MB/s), 48.4MiB/s-48.4MiB/s (50.7MB/s-50.7MB/s), io=290MiB (304MB), run=6002-6002msec 00:09:28.329 WRITE: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=149MiB (156MB), run=5218-5218msec 00:09:28.329 00:09:28.329 Disk stats (read/write): 00:09:28.329 nvme0n1: ios=72736/38135, merge=0/0, ticks=493333/215259, in_queue=708592, util=98.62% 00:09:28.329 04:24:31 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:28.329 04:24:31 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.329 04:24:31 -- common/autotest_common.sh@1208 -- # local i=0 00:09:28.329 04:24:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:28.329 04:24:31 -- common/autotest_common.sh@1209 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:28.329 04:24:31 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:28.329 04:24:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.329 04:24:31 -- common/autotest_common.sh@1220 -- # return 0 00:09:28.329 04:24:31 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.587 04:24:31 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:28.587 04:24:31 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:28.587 04:24:31 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:28.587 04:24:31 -- target/multipath.sh@144 -- # nvmftestfini 00:09:28.587 04:24:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:28.587 04:24:31 -- nvmf/common.sh@116 -- # sync 00:09:28.846 04:24:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:28.846 04:24:31 -- nvmf/common.sh@119 -- # set +e 00:09:28.846 04:24:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:28.846 04:24:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:28.846 rmmod nvme_tcp 00:09:28.846 rmmod nvme_fabrics 00:09:28.846 rmmod nvme_keyring 00:09:28.846 04:24:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:28.846 04:24:31 -- nvmf/common.sh@123 -- # set -e 00:09:28.846 04:24:31 -- nvmf/common.sh@124 -- # return 0 00:09:28.846 04:24:31 -- nvmf/common.sh@477 -- # '[' -n 62183 ']' 00:09:28.846 04:24:31 -- nvmf/common.sh@478 -- # killprocess 62183 00:09:28.846 04:24:31 -- common/autotest_common.sh@936 -- # '[' -z 62183 ']' 00:09:28.846 04:24:31 -- common/autotest_common.sh@940 -- # kill -0 62183 00:09:28.846 04:24:31 -- common/autotest_common.sh@941 -- # uname 00:09:28.846 04:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:28.846 04:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62183 00:09:28.846 killing process with pid 62183 00:09:28.846 04:24:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:28.846 04:24:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:28.846 04:24:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62183' 00:09:28.846 04:24:31 -- common/autotest_common.sh@955 -- # kill 62183 00:09:28.846 04:24:31 -- common/autotest_common.sh@960 -- # wait 62183 00:09:29.105 04:24:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:29.105 04:24:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:29.105 04:24:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:29.105 04:24:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.105 04:24:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:29.105 04:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.105 04:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.105 04:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.105 04:24:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:29.105 ************************************ 00:09:29.105 END TEST nvmf_multipath 00:09:29.105 ************************************ 00:09:29.105 00:09:29.105 real 0m19.551s 00:09:29.105 user 1m13.381s 00:09:29.105 sys 0m9.914s 00:09:29.105 04:24:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:29.105 04:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:29.105 04:24:32 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.105 04:24:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:29.105 04:24:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:29.105 04:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:29.105 ************************************ 00:09:29.105 START TEST nvmf_zcopy 00:09:29.105 ************************************ 00:09:29.105 04:24:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.105 * Looking for test storage... 00:09:29.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.105 04:24:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:29.105 04:24:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:29.105 04:24:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:29.365 04:24:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:29.365 04:24:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:29.365 04:24:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:29.365 04:24:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:29.365 04:24:32 -- scripts/common.sh@335 -- # IFS=.-: 00:09:29.365 04:24:32 -- scripts/common.sh@335 -- # read -ra ver1 00:09:29.365 04:24:32 -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.365 04:24:32 -- scripts/common.sh@336 -- # read -ra ver2 00:09:29.365 04:24:32 -- scripts/common.sh@337 -- # local 'op=<' 00:09:29.365 04:24:32 -- scripts/common.sh@339 -- # ver1_l=2 00:09:29.365 04:24:32 -- scripts/common.sh@340 -- # ver2_l=1 00:09:29.365 04:24:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:29.365 04:24:32 -- scripts/common.sh@343 -- # case "$op" in 00:09:29.365 04:24:32 -- scripts/common.sh@344 -- # : 1 00:09:29.365 04:24:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:29.365 04:24:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.365 04:24:32 -- scripts/common.sh@364 -- # decimal 1 00:09:29.365 04:24:32 -- scripts/common.sh@352 -- # local d=1 00:09:29.365 04:24:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.365 04:24:32 -- scripts/common.sh@354 -- # echo 1 00:09:29.365 04:24:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:29.365 04:24:32 -- scripts/common.sh@365 -- # decimal 2 00:09:29.365 04:24:32 -- scripts/common.sh@352 -- # local d=2 00:09:29.365 04:24:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.365 04:24:32 -- scripts/common.sh@354 -- # echo 2 00:09:29.365 04:24:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:29.365 04:24:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:29.365 04:24:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:29.365 04:24:32 -- scripts/common.sh@367 -- # return 0 00:09:29.365 04:24:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.365 04:24:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.365 --rc genhtml_branch_coverage=1 00:09:29.365 --rc genhtml_function_coverage=1 00:09:29.365 --rc genhtml_legend=1 00:09:29.365 --rc geninfo_all_blocks=1 00:09:29.365 --rc geninfo_unexecuted_blocks=1 00:09:29.365 00:09:29.365 ' 00:09:29.365 04:24:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.365 --rc genhtml_branch_coverage=1 00:09:29.365 --rc genhtml_function_coverage=1 00:09:29.365 --rc genhtml_legend=1 00:09:29.365 --rc geninfo_all_blocks=1 00:09:29.365 --rc geninfo_unexecuted_blocks=1 00:09:29.365 00:09:29.365 ' 00:09:29.365 04:24:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.365 --rc genhtml_branch_coverage=1 00:09:29.365 --rc genhtml_function_coverage=1 00:09:29.365 --rc genhtml_legend=1 00:09:29.365 --rc geninfo_all_blocks=1 00:09:29.365 --rc geninfo_unexecuted_blocks=1 00:09:29.365 00:09:29.365 ' 00:09:29.365 04:24:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.365 --rc genhtml_branch_coverage=1 00:09:29.365 --rc genhtml_function_coverage=1 00:09:29.365 --rc genhtml_legend=1 00:09:29.365 --rc geninfo_all_blocks=1 00:09:29.365 --rc geninfo_unexecuted_blocks=1 00:09:29.365 00:09:29.365 ' 00:09:29.365 04:24:32 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.365 04:24:32 -- nvmf/common.sh@7 -- # uname -s 00:09:29.365 04:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.365 04:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.365 04:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.365 04:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.365 04:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.365 04:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.365 04:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.365 04:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.365 04:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.365 04:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.365 04:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:29.365 
04:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:29.365 04:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.365 04:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.365 04:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.365 04:24:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.365 04:24:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.365 04:24:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.365 04:24:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.365 04:24:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.365 04:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.366 04:24:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.366 04:24:32 -- paths/export.sh@5 -- # export PATH 00:09:29.366 04:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.366 04:24:32 -- nvmf/common.sh@46 -- # : 0 00:09:29.366 04:24:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:29.366 04:24:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:29.366 04:24:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:29.366 04:24:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.366 04:24:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.366 04:24:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
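The common.sh header traced above sets the test's initiator identity: nvme gen-hostnqn produces NVME_HOSTNQN, its UUID suffix becomes NVME_HOSTID, and the NVME_HOST array carries both as --hostnqn/--hostid flags. As a hedged illustration only (this zcopy test drives I/O through bdevperf rather than the kernel host, so the command below is not part of this run), that identity would be consumed by a kernel-initiator connect against the listener this test configures further down:
  # sketch, assuming nvme-cli on the initiator; NQN, serial and addresses taken from this log
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b \
      --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b
  # the multipath test that finished above located the resulting block device by serial number
  lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME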
00:09:29.366 04:24:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:29.366 04:24:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:29.366 04:24:32 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:29.366 04:24:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:29.366 04:24:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.366 04:24:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:29.366 04:24:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:29.366 04:24:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:29.366 04:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.366 04:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.366 04:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.366 04:24:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:29.366 04:24:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:29.366 04:24:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:29.366 04:24:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:29.366 04:24:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:29.366 04:24:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:29.366 04:24:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.366 04:24:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.366 04:24:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:29.366 04:24:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:29.366 04:24:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.366 04:24:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.366 04:24:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.366 04:24:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.366 04:24:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.366 04:24:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.366 04:24:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.366 04:24:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.366 04:24:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:29.366 04:24:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:29.366 Cannot find device "nvmf_tgt_br" 00:09:29.366 04:24:32 -- nvmf/common.sh@154 -- # true 00:09:29.366 04:24:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.366 Cannot find device "nvmf_tgt_br2" 00:09:29.366 04:24:32 -- nvmf/common.sh@155 -- # true 00:09:29.366 04:24:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:29.366 04:24:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:29.366 Cannot find device "nvmf_tgt_br" 00:09:29.366 04:24:32 -- nvmf/common.sh@157 -- # true 00:09:29.366 04:24:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:29.366 Cannot find device "nvmf_tgt_br2" 00:09:29.366 04:24:32 -- nvmf/common.sh@158 -- # true 00:09:29.366 04:24:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:29.366 04:24:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:29.366 04:24:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.625 04:24:32 -- nvmf/common.sh@161 -- # true 00:09:29.625 04:24:32 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.625 04:24:32 -- nvmf/common.sh@162 -- # true 00:09:29.625 04:24:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.625 04:24:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.625 04:24:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.625 04:24:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.625 04:24:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.625 04:24:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.625 04:24:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.625 04:24:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:29.625 04:24:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:29.625 04:24:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:29.625 04:24:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:29.625 04:24:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:29.625 04:24:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:29.625 04:24:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.625 04:24:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.625 04:24:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.625 04:24:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:29.625 04:24:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:29.625 04:24:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.625 04:24:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.625 04:24:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.625 04:24:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.625 04:24:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.625 04:24:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:29.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:29.625 00:09:29.625 --- 10.0.0.2 ping statistics --- 00:09:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.625 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:29.625 04:24:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:29.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:09:29.625 00:09:29.625 --- 10.0.0.3 ping statistics --- 00:09:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.625 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:29.625 04:24:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:29.625 00:09:29.625 --- 10.0.0.1 ping statistics --- 00:09:29.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.625 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:29.625 04:24:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.625 04:24:32 -- nvmf/common.sh@421 -- # return 0 00:09:29.625 04:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:29.625 04:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.625 04:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:29.625 04:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:29.625 04:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.625 04:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:29.625 04:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:29.625 04:24:32 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:29.625 04:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:29.625 04:24:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.625 04:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:29.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.625 04:24:32 -- nvmf/common.sh@469 -- # nvmfpid=62653 00:09:29.625 04:24:32 -- nvmf/common.sh@470 -- # waitforlisten 62653 00:09:29.626 04:24:32 -- common/autotest_common.sh@829 -- # '[' -z 62653 ']' 00:09:29.626 04:24:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.626 04:24:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.626 04:24:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.626 04:24:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.626 04:24:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.626 04:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:29.884 [2024-12-07 04:24:32.886358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:29.884 [2024-12-07 04:24:32.886457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.884 [2024-12-07 04:24:33.024806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.884 [2024-12-07 04:24:33.073804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:29.884 [2024-12-07 04:24:33.073926] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.884 [2024-12-07 04:24:33.073939] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.884 [2024-12-07 04:24:33.073946] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
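The nvmf_veth_init trace above gives the target its own network namespace plus a bridged veth topology, verified by the three pings, before nvmf_tgt is started inside that namespace. Condensed into a sketch (the stale-device cleanup and the various ip link set ... up calls traced above are omitted):
  # condensed sketch of the nvmf_veth_init steps traced above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target is then launched inside the namespace, as nvmfappstart does above
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &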
00:09:29.884 [2024-12-07 04:24:33.073975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.821 04:24:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.821 04:24:33 -- common/autotest_common.sh@862 -- # return 0 00:09:30.821 04:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:30.821 04:24:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 04:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.821 04:24:33 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:30.821 04:24:33 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:30.821 04:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 [2024-12-07 04:24:33.868438] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.821 04:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.821 04:24:33 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:30.821 04:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 04:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.821 04:24:33 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.821 04:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 [2024-12-07 04:24:33.884501] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.821 04:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.821 04:24:33 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.821 04:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 04:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.821 04:24:33 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:30.821 04:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 malloc0 00:09:30.821 04:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.821 04:24:33 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:30.821 04:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.821 04:24:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.821 04:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.821 04:24:33 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:30.821 04:24:33 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:30.821 04:24:33 -- nvmf/common.sh@520 -- # config=() 00:09:30.821 04:24:33 -- nvmf/common.sh@520 -- # local subsystem config 00:09:30.821 04:24:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:30.821 04:24:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:30.821 { 00:09:30.822 "params": { 00:09:30.822 "name": "Nvme$subsystem", 00:09:30.822 "trtype": "$TEST_TRANSPORT", 
00:09:30.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.822 "adrfam": "ipv4", 00:09:30.822 "trsvcid": "$NVMF_PORT", 00:09:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.822 "hdgst": ${hdgst:-false}, 00:09:30.822 "ddgst": ${ddgst:-false} 00:09:30.822 }, 00:09:30.822 "method": "bdev_nvme_attach_controller" 00:09:30.822 } 00:09:30.822 EOF 00:09:30.822 )") 00:09:30.822 04:24:33 -- nvmf/common.sh@542 -- # cat 00:09:30.822 04:24:33 -- nvmf/common.sh@544 -- # jq . 00:09:30.822 04:24:33 -- nvmf/common.sh@545 -- # IFS=, 00:09:30.822 04:24:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:30.822 "params": { 00:09:30.822 "name": "Nvme1", 00:09:30.822 "trtype": "tcp", 00:09:30.822 "traddr": "10.0.0.2", 00:09:30.822 "adrfam": "ipv4", 00:09:30.822 "trsvcid": "4420", 00:09:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.822 "hdgst": false, 00:09:30.822 "ddgst": false 00:09:30.822 }, 00:09:30.822 "method": "bdev_nvme_attach_controller" 00:09:30.822 }' 00:09:30.822 [2024-12-07 04:24:33.970769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:30.822 [2024-12-07 04:24:33.970860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62686 ] 00:09:31.080 [2024-12-07 04:24:34.111807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.080 [2024-12-07 04:24:34.180296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.339 Running I/O for 10 seconds... 00:09:41.311 00:09:41.311 Latency(us) 00:09:41.311 [2024-12-07T04:24:44.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.311 [2024-12-07T04:24:44.551Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:41.311 Verification LBA range: start 0x0 length 0x1000 00:09:41.311 Nvme1n1 : 10.01 10021.83 78.30 0.00 0.00 12739.08 1042.62 19899.11 00:09:41.311 [2024-12-07T04:24:44.551Z] =================================================================================================================== 00:09:41.311 [2024-12-07T04:24:44.551Z] Total : 10021.83 78.30 0.00 0.00 12739.08 1042.62 19899.11 00:09:41.311 04:24:44 -- target/zcopy.sh@39 -- # perfpid=62809 00:09:41.311 04:24:44 -- target/zcopy.sh@41 -- # xtrace_disable 00:09:41.311 04:24:44 -- common/autotest_common.sh@10 -- # set +x 00:09:41.311 04:24:44 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:41.311 04:24:44 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:41.311 04:24:44 -- nvmf/common.sh@520 -- # config=() 00:09:41.311 04:24:44 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.311 04:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.311 04:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.311 { 00:09:41.311 "params": { 00:09:41.311 "name": "Nvme$subsystem", 00:09:41.311 "trtype": "$TEST_TRANSPORT", 00:09:41.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.311 "adrfam": "ipv4", 00:09:41.311 "trsvcid": "$NVMF_PORT", 00:09:41.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.311 "hdgst": ${hdgst:-false}, 00:09:41.311 "ddgst": ${ddgst:-false} 
00:09:41.311 }, 00:09:41.311 "method": "bdev_nvme_attach_controller" 00:09:41.311 } 00:09:41.311 EOF 00:09:41.311 )") 00:09:41.311 [2024-12-07 04:24:44.517313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-12-07 04:24:44.517356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 04:24:44 -- nvmf/common.sh@542 -- # cat 00:09:41.311 04:24:44 -- nvmf/common.sh@544 -- # jq . 00:09:41.311 04:24:44 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.311 04:24:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.311 "params": { 00:09:41.311 "name": "Nvme1", 00:09:41.311 "trtype": "tcp", 00:09:41.311 "traddr": "10.0.0.2", 00:09:41.311 "adrfam": "ipv4", 00:09:41.311 "trsvcid": "4420", 00:09:41.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.311 "hdgst": false, 00:09:41.311 "ddgst": false 00:09:41.311 }, 00:09:41.311 "method": "bdev_nvme_attach_controller" 00:09:41.311 }' 00:09:41.311 [2024-12-07 04:24:44.529273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-12-07 04:24:44.529301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-12-07 04:24:44.537270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-12-07 04:24:44.537295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.549293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.549321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.557283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.557313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.565551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:41.572 [2024-12-07 04:24:44.565675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62809 ] 00:09:41.572 [2024-12-07 04:24:44.569276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.569301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.581277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.581301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.593282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.593305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.605283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.605306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.617284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.617308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.629305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.629332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.641311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.641342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.653301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.653327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.665301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.665324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.677302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.677326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.689323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.689351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.701379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.701415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.703286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.572 [2024-12-07 04:24:44.709357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.709386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
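The repeated pairs of "Requested NSID 1 already in use" / "Unable to add namespace" that fill the rest of this run come from nvmf_subsystem_add_ns RPCs issued around and during the second bdevperf job: cnode1 already exposes malloc0 as NSID 1 from the setup earlier in this log, so each further add with the same NSID is rejected (the loop driving those RPCs is not itself visible in this excerpt). A minimal sketch of the rejected call, assuming the same rpc.py path and target used above:
  # sketch: the RPC shape behind each error pair; NSID 1 is already taken by malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 malloc0 -n 1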
00:09:41.572 [2024-12-07 04:24:44.717339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.717364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.725343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.725367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.733349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.733374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.741352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.741378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.749349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.749373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.756162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.572 [2024-12-07 04:24:44.757352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.757376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.765366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.765533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.773383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.773416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.781382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.781416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.789379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.789413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.797365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.797392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.572 [2024-12-07 04:24:44.805446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.572 [2024-12-07 04:24:44.805482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.813382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.813427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.821399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.821428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.829401] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.829429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.837403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.837432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.849419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.849447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.857420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.857450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.865429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.865473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.873448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.873475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.881468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.881498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 Running I/O for 5 seconds... 00:09:41.833 [2024-12-07 04:24:44.889443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.889470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.901130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.901194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.909525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.909556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.921947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.922008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.931603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.931637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.945858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.945914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.955491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.955535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.969781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 
[2024-12-07 04:24:44.969814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.981511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.981543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:44.989963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:44.990025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.002478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.002509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.012814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.012847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.022324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.022356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.032192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.032366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.043912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.043965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.054508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.054702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.833 [2024-12-07 04:24:45.065418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.833 [2024-12-07 04:24:45.065451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.091 [2024-12-07 04:24:45.076631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.091 [2024-12-07 04:24:45.076728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.091 [2024-12-07 04:24:45.091745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.091 [2024-12-07 04:24:45.091781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.091 [2024-12-07 04:24:45.100960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.091 [2024-12-07 04:24:45.101007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.091 [2024-12-07 04:24:45.116656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.091 [2024-12-07 04:24:45.116718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.126289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.126546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.140454] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.140498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.156201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.156234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.165744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.165797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.177157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.177339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.188819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.188850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.197217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.197248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.209168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.209199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.218897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.218928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.228173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.228349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.237955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.238002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.247331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.247538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.257502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.257535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.267905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.267967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.279903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.279935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.289064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.289095] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.301013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.301044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.312326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.312358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.092 [2024-12-07 04:24:45.329014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.092 [2024-12-07 04:24:45.329182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.339351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.339389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.349186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.349217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.358267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.358298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.368227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.368401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.378286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.378318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.392348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.392380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.400736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.400789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.412505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.412536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.424270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.424300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.432424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.432455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.443968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.443999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.454576] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.454607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.463152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.463344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.475525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.475559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.484754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.484786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.498197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.498373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.507348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.507402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.518474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.518506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.529303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.529482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.540771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.540936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.556580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.556802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.566430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.566464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.350 [2024-12-07 04:24:45.579211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.350 [2024-12-07 04:24:45.579243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.589776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.589857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.601055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.601087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.617890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.617921] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.636252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.636416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.646695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.646887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.656722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.656914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.666710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.666904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.677739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.677925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.690024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.690213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.699084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.699258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.711809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.712002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.721375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.721566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.736053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.736215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.746924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.747132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.763170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.763344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.773183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.773361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.783438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.783600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.793659] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.793850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.804037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.804210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.814187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.814362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.824585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.824806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.834298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.834456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.610 [2024-12-07 04:24:45.845310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.610 [2024-12-07 04:24:45.845358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.857214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.857373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.865844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.866040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.878601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.878634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.888586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.888835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.903887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.903923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.914328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.914361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.929193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.929227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.938724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.938757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.953028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.953063] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.961938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.962144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.975014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.975048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:45.990908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:45.990942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.000180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:46.000213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.010959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:46.011007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.023081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:46.023113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.032444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:46.032477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.044835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:46.044869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.053938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.869 [2024-12-07 04:24:46.053970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.869 [2024-12-07 04:24:46.065884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.870 [2024-12-07 04:24:46.065913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.870 [2024-12-07 04:24:46.075795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.870 [2024-12-07 04:24:46.075828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.870 [2024-12-07 04:24:46.086603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.870 [2024-12-07 04:24:46.086684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.870 [2024-12-07 04:24:46.098923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.870 [2024-12-07 04:24:46.099123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.108574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.108608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.123088] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.123122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.132464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.132498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.143885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.143922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.154254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.154419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.165125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.165349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.178077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.178128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.187510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.187546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.200070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.200248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.210663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.210742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.224313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.224344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.233087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.233119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.247374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.247577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.256518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.256550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.266870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.266904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.276623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.276715] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.286813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.286845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.296838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.296870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.306061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.306251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.315809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.315998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.325448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.325620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.336672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.336881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.348136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.348308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.129 [2024-12-07 04:24:46.357031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.129 [2024-12-07 04:24:46.357189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.367812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.367989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.380014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.380200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.388904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.389079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.403449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.403602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.412615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.412817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.422525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.422729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.432330] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.432503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.442369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.442542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.452446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.452619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.462450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.462623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.472637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.472842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.482538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.482727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.492341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.492514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.502135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.502309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.512015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.512187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.521928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.521961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.532170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.532202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.542846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.542879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.554170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.554347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.564815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.564849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.576697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.576913] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.587593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.587627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.596507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.596711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.610685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.610717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-12-07 04:24:46.619741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-12-07 04:24:46.619773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.635859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.635896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.645881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.645915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.656923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.657160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.674780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.674953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.689863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.690007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.699641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.699864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.714360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.714536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.724003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.724178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.738365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.738541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.747575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.647 [2024-12-07 04:24:46.747767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.647 [2024-12-07 04:24:46.759982] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.760155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.769540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.769706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.784605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.784831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.793513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.793699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.806195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.806369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.816435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.816613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.826601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.826841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.836627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.836870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.848070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.848243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.856643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.856878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.868945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.869134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.648 [2024-12-07 04:24:46.879224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.648 [2024-12-07 04:24:46.879423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.890512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.890697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.903047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.903081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.912436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.912613] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.929061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.929098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.938875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.938910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.953356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.953388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.962489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.962694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.972891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.972923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.983214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.983246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:46.993327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:46.993359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.003485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.003518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.013344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.013376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.023718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.023781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.033252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.033284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.045131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.045192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.054519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.054729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.067043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.067075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.076910] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.076944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.086721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.086753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.096719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.096759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.106565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.106796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.116867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.116900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.127189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.127222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-12-07 04:24:47.139599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-12-07 04:24:47.139633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.150420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.150599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.162312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.162505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.172472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.172688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.182670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.182880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.192960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.193199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.203311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.203500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.213560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.213779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.227871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.228015] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.243873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.244072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.253347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.253504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.265501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.265687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.275846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.276019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.285929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.286108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.300543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.300798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.309738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.309929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.325494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.325711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.333878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.334039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.346742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.346830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.363820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.363991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.373577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.373609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.383733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.383764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.393286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.393318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-12-07 04:24:47.404324] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-12-07 04:24:47.404356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.413459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.413490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.425618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.425709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.436682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.436725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.445072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.445103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.456603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.456635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.466147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.466180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.481237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.481417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.490225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.490257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.502199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.502231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.513276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.513308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.521499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.521530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.533224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.533255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.545042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.545075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.553978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.554025] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.566906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.566958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.576964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.577031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.587818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.587852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.598182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.598213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.608249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.608289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.618111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.618157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.628219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.628265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.638066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.638099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.647963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.648011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-12-07 04:24:47.656851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-12-07 04:24:47.656883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.685 [2024-12-07 04:24:47.670433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.685 [2024-12-07 04:24:47.670465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.685 [2024-12-07 04:24:47.679241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.685 [2024-12-07 04:24:47.679445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.685 [2024-12-07 04:24:47.689670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.689713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.698894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.698926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.710128] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.710158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.723899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.723935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.734027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.734060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.745254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.745434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.755715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.755748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.765997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.766030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.776176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.776226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.786403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.786435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.800127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.800198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.809240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.809272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.819297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.819328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.828409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.828441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.838204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.838252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.847768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.847800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.857506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.857722] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.866932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.866965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.876772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.876804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.886313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.886344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.896170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.896345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.906118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.906151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.686 [2024-12-07 04:24:47.915868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.686 [2024-12-07 04:24:47.915900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.945 [2024-12-07 04:24:47.930538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.945 [2024-12-07 04:24:47.930748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.945 [2024-12-07 04:24:47.941149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:47.941369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:47.956433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:47.956608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:47.967014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:47.967206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:47.977672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:47.977860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:47.989206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:47.989378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.005274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.005539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.015060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.015325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.029148] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.029362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.038155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.038330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.052286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.052458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.061429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.061604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.071085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.071258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.081017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.081189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.090451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.090623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.100187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.100341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.109810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.109999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.119518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.119690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.129381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.129554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.139367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.139554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.148749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.148913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.158274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.158430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.168732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.168910] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.946 [2024-12-07 04:24:48.178728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.946 [2024-12-07 04:24:48.178892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.189698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.189875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.199306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.199488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.209103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.209323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.223952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.224172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.233179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.233211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.244829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.244860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.205 [2024-12-07 04:24:48.255487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.205 [2024-12-07 04:24:48.255668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.263992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.264038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.275505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.275776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.285888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.285924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.295018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.295198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.304584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.304616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.314366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.314545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.206 [2024-12-07 04:24:48.329262] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.206 [2024-12-07 04:24:48.329422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log trimmed: the same two-message pair ("Requested NSID 1 already in use" from subsystem.c:1793, followed by "Unable to add namespace" from nvmf_rpc.c:1513) repeats for every further attempt from 04:24:48.339 through 04:24:49.819 as the zcopy test keeps requesting a namespace add with NSID 1 while that NSID is still attached; the identical entries are omitted here]
[2024-12-07 04:24:49.819928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.820120]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.829349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.829527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.839327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.839547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.849037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.849215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.859205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.859238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.868628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.868863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.879026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.879059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.888918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.888984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.899553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.899606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 00:09:46.774 Latency(us) 00:09:46.774 [2024-12-07T04:24:50.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.774 [2024-12-07T04:24:50.014Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:46.774 Nvme1n1 : 5.01 12481.62 97.51 0.00 0.00 10242.41 4110.89 19779.96 00:09:46.774 [2024-12-07T04:24:50.014Z] =================================================================================================================== 00:09:46.774 [2024-12-07T04:24:50.014Z] Total : 12481.62 97.51 0.00 0.00 10242.41 4110.89 19779.96 00:09:46.774 [2024-12-07 04:24:49.907532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.907703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.915565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.915722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.923566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.923724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.931603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.931892] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.943618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.943859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.951595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.951854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.963610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.963916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.975621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.975688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.983587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.983622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:49.999591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:49.999627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.774 [2024-12-07 04:24:50.007588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.774 [2024-12-07 04:24:50.007617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.033 [2024-12-07 04:24:50.015599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.033 [2024-12-07 04:24:50.015636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.023610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.023665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.031611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.031663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.043636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.043702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.051624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.051986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.059619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.059766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.071632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.071922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.079605] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.079743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 [2024-12-07 04:24:50.087601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.034 [2024-12-07 04:24:50.087757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.034 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62809) - No such process 00:09:47.034 04:24:50 -- target/zcopy.sh@49 -- # wait 62809 00:09:47.034 04:24:50 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.034 04:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.034 04:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:47.034 04:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.034 04:24:50 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:47.034 04:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.034 04:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:47.034 delay0 00:09:47.034 04:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.034 04:24:50 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:47.034 04:24:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.034 04:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:47.034 04:24:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.034 04:24:50 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:47.293 [2024-12-07 04:24:50.290742] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.939 Initializing NVMe Controllers 00:09:53.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.939 Initialization complete. Launching workers. 
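For readers following along, the teardown above swaps the original namespace of cnode1 for a deliberately slow delay bdev and then runs SPDK's bundled abort example against it; the statistics that follow summarize how many of the queued I/Os could be aborted. A minimal sketch of the equivalent manual steps, assuming the SPDK repo root as the working directory and the stock scripts/rpc.py client (the log itself goes through the rpc_cmd test helper); bdev names, latency values, and the listener address are taken from the log:

# Hypothetical manual equivalent of the rpc_cmd calls shown above.
# Replace the namespace backing cnode1 with a delay bdev (-r/-t/-w/-n are the
# average/p99 read/write latencies, values copied from the log).
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive the slowed-down namespace over TCP for 5 seconds while issuing aborts.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'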
00:09:53.939 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 295 00:09:53.939 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 582, failed to submit 33 00:09:53.939 success 455, unsuccess 127, failed 0 00:09:53.939 04:24:56 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:53.939 04:24:56 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:53.939 04:24:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:53.939 04:24:56 -- nvmf/common.sh@116 -- # sync 00:09:53.939 04:24:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:53.939 04:24:56 -- nvmf/common.sh@119 -- # set +e 00:09:53.939 04:24:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:53.939 04:24:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:53.939 rmmod nvme_tcp 00:09:53.939 rmmod nvme_fabrics 00:09:53.939 rmmod nvme_keyring 00:09:53.939 04:24:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:53.939 04:24:56 -- nvmf/common.sh@123 -- # set -e 00:09:53.939 04:24:56 -- nvmf/common.sh@124 -- # return 0 00:09:53.939 04:24:56 -- nvmf/common.sh@477 -- # '[' -n 62653 ']' 00:09:53.939 04:24:56 -- nvmf/common.sh@478 -- # killprocess 62653 00:09:53.939 04:24:56 -- common/autotest_common.sh@936 -- # '[' -z 62653 ']' 00:09:53.939 04:24:56 -- common/autotest_common.sh@940 -- # kill -0 62653 00:09:53.939 04:24:56 -- common/autotest_common.sh@941 -- # uname 00:09:53.939 04:24:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:53.939 04:24:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62653 00:09:53.939 killing process with pid 62653 00:09:53.939 04:24:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:53.939 04:24:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:53.939 04:24:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62653' 00:09:53.939 04:24:56 -- common/autotest_common.sh@955 -- # kill 62653 00:09:53.939 04:24:56 -- common/autotest_common.sh@960 -- # wait 62653 00:09:53.939 04:24:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:53.939 04:24:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:53.939 04:24:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:53.939 04:24:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.939 04:24:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:53.939 04:24:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.939 04:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.939 04:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.939 04:24:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:53.939 00:09:53.939 real 0m24.531s 00:09:53.939 user 0m40.331s 00:09:53.939 sys 0m6.451s 00:09:53.939 ************************************ 00:09:53.939 END TEST nvmf_zcopy 00:09:53.939 ************************************ 00:09:53.939 04:24:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.939 04:24:56 -- common/autotest_common.sh@10 -- # set +x 00:09:53.939 04:24:56 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.939 04:24:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:53.939 04:24:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.939 04:24:56 -- common/autotest_common.sh@10 -- # set +x 00:09:53.939 ************************************ 00:09:53.939 START TEST 
nvmf_nmic 00:09:53.939 ************************************ 00:09:53.939 04:24:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.939 * Looking for test storage... 00:09:53.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.939 04:24:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:53.939 04:24:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:53.939 04:24:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:53.939 04:24:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:53.939 04:24:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:53.939 04:24:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:53.939 04:24:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:53.939 04:24:56 -- scripts/common.sh@335 -- # IFS=.-: 00:09:53.939 04:24:56 -- scripts/common.sh@335 -- # read -ra ver1 00:09:53.939 04:24:56 -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.939 04:24:56 -- scripts/common.sh@336 -- # read -ra ver2 00:09:53.939 04:24:56 -- scripts/common.sh@337 -- # local 'op=<' 00:09:53.939 04:24:56 -- scripts/common.sh@339 -- # ver1_l=2 00:09:53.939 04:24:56 -- scripts/common.sh@340 -- # ver2_l=1 00:09:53.939 04:24:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:53.939 04:24:56 -- scripts/common.sh@343 -- # case "$op" in 00:09:53.939 04:24:56 -- scripts/common.sh@344 -- # : 1 00:09:53.939 04:24:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:53.939 04:24:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.939 04:24:56 -- scripts/common.sh@364 -- # decimal 1 00:09:53.939 04:24:56 -- scripts/common.sh@352 -- # local d=1 00:09:53.939 04:24:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.939 04:24:56 -- scripts/common.sh@354 -- # echo 1 00:09:53.939 04:24:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:53.939 04:24:56 -- scripts/common.sh@365 -- # decimal 2 00:09:53.939 04:24:56 -- scripts/common.sh@352 -- # local d=2 00:09:53.939 04:24:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.939 04:24:56 -- scripts/common.sh@354 -- # echo 2 00:09:53.939 04:24:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:53.939 04:24:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:53.939 04:24:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:53.939 04:24:56 -- scripts/common.sh@367 -- # return 0 00:09:53.939 04:24:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.939 04:24:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.939 --rc genhtml_branch_coverage=1 00:09:53.939 --rc genhtml_function_coverage=1 00:09:53.939 --rc genhtml_legend=1 00:09:53.939 --rc geninfo_all_blocks=1 00:09:53.939 --rc geninfo_unexecuted_blocks=1 00:09:53.939 00:09:53.939 ' 00:09:53.940 04:24:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:53.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.940 --rc genhtml_branch_coverage=1 00:09:53.940 --rc genhtml_function_coverage=1 00:09:53.940 --rc genhtml_legend=1 00:09:53.940 --rc geninfo_all_blocks=1 00:09:53.940 --rc geninfo_unexecuted_blocks=1 00:09:53.940 00:09:53.940 ' 00:09:53.940 04:24:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:53.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.940 --rc 
genhtml_branch_coverage=1 00:09:53.940 --rc genhtml_function_coverage=1 00:09:53.940 --rc genhtml_legend=1 00:09:53.940 --rc geninfo_all_blocks=1 00:09:53.940 --rc geninfo_unexecuted_blocks=1 00:09:53.940 00:09:53.940 ' 00:09:53.940 04:24:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:53.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.940 --rc genhtml_branch_coverage=1 00:09:53.940 --rc genhtml_function_coverage=1 00:09:53.940 --rc genhtml_legend=1 00:09:53.940 --rc geninfo_all_blocks=1 00:09:53.940 --rc geninfo_unexecuted_blocks=1 00:09:53.940 00:09:53.940 ' 00:09:53.940 04:24:56 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.940 04:24:56 -- nvmf/common.sh@7 -- # uname -s 00:09:53.940 04:24:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.940 04:24:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.940 04:24:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.940 04:24:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.940 04:24:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.940 04:24:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.940 04:24:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.940 04:24:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.940 04:24:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.940 04:24:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.940 04:24:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:53.940 04:24:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:53.940 04:24:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.940 04:24:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.940 04:24:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.940 04:24:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.940 04:24:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.940 04:24:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.940 04:24:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.940 04:24:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.940 04:24:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.940 04:24:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.940 04:24:56 -- paths/export.sh@5 -- # export PATH 00:09:53.940 04:24:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.940 04:24:56 -- nvmf/common.sh@46 -- # : 0 00:09:53.940 04:24:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:53.940 04:24:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:53.940 04:24:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:53.940 04:24:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.940 04:24:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.940 04:24:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:53.940 04:24:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:53.940 04:24:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:53.940 04:24:56 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.940 04:24:56 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.940 04:24:56 -- target/nmic.sh@14 -- # nvmftestinit 00:09:53.940 04:24:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:53.940 04:24:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.940 04:24:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:53.940 04:24:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:53.940 04:24:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:53.940 04:24:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.940 04:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.940 04:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.940 04:24:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:53.940 04:24:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:53.940 04:24:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:53.940 04:24:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:53.940 04:24:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:53.940 04:24:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:53.940 04:24:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.940 04:24:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.940 04:24:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.940 04:24:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:53.940 04:24:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.940 04:24:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.940 04:24:57 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.940 04:24:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.940 04:24:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.940 04:24:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.940 04:24:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.940 04:24:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.940 04:24:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:53.940 04:24:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:53.940 Cannot find device "nvmf_tgt_br" 00:09:53.940 04:24:57 -- nvmf/common.sh@154 -- # true 00:09:53.940 04:24:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.940 Cannot find device "nvmf_tgt_br2" 00:09:53.940 04:24:57 -- nvmf/common.sh@155 -- # true 00:09:53.940 04:24:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:53.940 04:24:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:53.940 Cannot find device "nvmf_tgt_br" 00:09:53.940 04:24:57 -- nvmf/common.sh@157 -- # true 00:09:53.940 04:24:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:53.940 Cannot find device "nvmf_tgt_br2" 00:09:53.940 04:24:57 -- nvmf/common.sh@158 -- # true 00:09:53.940 04:24:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:53.940 04:24:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:53.940 04:24:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.940 04:24:57 -- nvmf/common.sh@161 -- # true 00:09:53.940 04:24:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.940 04:24:57 -- nvmf/common.sh@162 -- # true 00:09:53.940 04:24:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.940 04:24:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.940 04:24:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.940 04:24:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.940 04:24:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.940 04:24:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.200 04:24:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.200 04:24:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:54.200 04:24:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:54.200 04:24:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:54.200 04:24:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:54.200 04:24:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:54.200 04:24:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:54.200 04:24:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:54.200 04:24:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.200 04:24:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:54.200 04:24:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:54.200 04:24:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:54.200 04:24:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.200 04:24:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.200 04:24:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.200 04:24:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.200 04:24:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.200 04:24:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:54.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:09:54.200 00:09:54.200 --- 10.0.0.2 ping statistics --- 00:09:54.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.200 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:54.200 04:24:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:54.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:54.200 00:09:54.200 --- 10.0.0.3 ping statistics --- 00:09:54.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.200 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:54.200 04:24:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:54.200 00:09:54.200 --- 10.0.0.1 ping statistics --- 00:09:54.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.200 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:54.200 04:24:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.200 04:24:57 -- nvmf/common.sh@421 -- # return 0 00:09:54.200 04:24:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:54.200 04:24:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.200 04:24:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:54.200 04:24:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:54.200 04:24:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.200 04:24:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:54.200 04:24:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:54.200 04:24:57 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:54.200 04:24:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:54.200 04:24:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.200 04:24:57 -- common/autotest_common.sh@10 -- # set +x 00:09:54.200 04:24:57 -- nvmf/common.sh@469 -- # nvmfpid=63134 00:09:54.200 04:24:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.200 04:24:57 -- nvmf/common.sh@470 -- # waitforlisten 63134 00:09:54.200 04:24:57 -- common/autotest_common.sh@829 -- # '[' -z 63134 ']' 00:09:54.200 04:24:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.200 04:24:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
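Before the nmic tests start, nvmf_veth_init (from test/nvmf/common.sh) builds the virtual topology that the pings above verify: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, and everything is joined through the nvmf_br bridge. A condensed sketch of that setup, assuming iproute2 and reusing the interface names from the log (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

# Condensed version of the topology the log builds above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair, moved into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target, matching the check in the log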
00:09:54.200 04:24:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.200 04:24:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.200 04:24:57 -- common/autotest_common.sh@10 -- # set +x 00:09:54.200 [2024-12-07 04:24:57.380658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:54.200 [2024-12-07 04:24:57.380765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.460 [2024-12-07 04:24:57.519610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.460 [2024-12-07 04:24:57.589552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:54.460 [2024-12-07 04:24:57.589746] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.460 [2024-12-07 04:24:57.589764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.460 [2024-12-07 04:24:57.589775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.460 [2024-12-07 04:24:57.589905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.460 [2024-12-07 04:24:57.593683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.460 [2024-12-07 04:24:57.593815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.460 [2024-12-07 04:24:57.593827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.396 04:24:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.396 04:24:58 -- common/autotest_common.sh@862 -- # return 0 00:09:55.396 04:24:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:55.396 04:24:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:55.396 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.396 04:24:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.396 04:24:58 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.396 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.396 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.396 [2024-12-07 04:24:58.468765] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.396 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.396 04:24:58 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.396 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.396 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.396 Malloc0 00:09:55.396 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.396 04:24:58 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.396 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.396 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.396 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.396 04:24:58 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.396 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.396 04:24:58 
-- common/autotest_common.sh@10 -- # set +x 00:09:55.396 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.396 04:24:58 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.397 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.397 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.397 [2024-12-07 04:24:58.533587] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.397 test case1: single bdev can't be used in multiple subsystems 00:09:55.397 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.397 04:24:58 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:55.397 04:24:58 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:55.397 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.397 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.397 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.397 04:24:58 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:55.397 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.397 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.397 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.397 04:24:58 -- target/nmic.sh@28 -- # nmic_status=0 00:09:55.397 04:24:58 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:55.397 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.397 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.397 [2024-12-07 04:24:58.557459] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:55.397 [2024-12-07 04:24:58.557500] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:55.397 [2024-12-07 04:24:58.557513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.397 request: 00:09:55.397 { 00:09:55.397 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:55.397 "namespace": { 00:09:55.397 "bdev_name": "Malloc0" 00:09:55.397 }, 00:09:55.397 "method": "nvmf_subsystem_add_ns", 00:09:55.397 "req_id": 1 00:09:55.397 } 00:09:55.397 Got JSON-RPC error response 00:09:55.397 response: 00:09:55.397 { 00:09:55.397 "code": -32602, 00:09:55.397 "message": "Invalid parameters" 00:09:55.397 } 00:09:55.397 Adding namespace failed - expected result. 00:09:55.397 test case2: host connect to nvmf target in multiple paths 00:09:55.397 04:24:58 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:55.397 04:24:58 -- target/nmic.sh@29 -- # nmic_status=1 00:09:55.397 04:24:58 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:55.397 04:24:58 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
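Test case 1 above verifies that a bdev already claimed by one subsystem cannot be added to a second one. With the rpc_cmd wrapper unwrapped into direct rpc.py calls, the sequence is roughly the following sketch; the expected outcome is the -32602 "Invalid parameters" error shown in the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Malloc0 is already claimed (exclusive_write) by cnode1, so this add_ns must fail.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'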
00:09:55.397 04:24:58 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:55.397 04:24:58 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:55.397 04:24:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.397 04:24:58 -- common/autotest_common.sh@10 -- # set +x 00:09:55.397 [2024-12-07 04:24:58.569591] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:55.397 04:24:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.397 04:24:58 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.655 04:24:58 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:55.656 04:24:58 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.656 04:24:58 -- common/autotest_common.sh@1187 -- # local i=0 00:09:55.656 04:24:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.656 04:24:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:55.656 04:24:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:58.210 04:25:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:58.210 04:25:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:58.210 04:25:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.210 04:25:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:58.210 04:25:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.210 04:25:00 -- common/autotest_common.sh@1197 -- # return 0 00:09:58.210 04:25:00 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:58.210 [global] 00:09:58.210 thread=1 00:09:58.210 invalidate=1 00:09:58.210 rw=write 00:09:58.210 time_based=1 00:09:58.210 runtime=1 00:09:58.210 ioengine=libaio 00:09:58.210 direct=1 00:09:58.210 bs=4096 00:09:58.210 iodepth=1 00:09:58.210 norandommap=0 00:09:58.210 numjobs=1 00:09:58.210 00:09:58.210 verify_dump=1 00:09:58.210 verify_backlog=512 00:09:58.210 verify_state_save=0 00:09:58.210 do_verify=1 00:09:58.210 verify=crc32c-intel 00:09:58.210 [job0] 00:09:58.210 filename=/dev/nvme0n1 00:09:58.210 Could not set queue depth (nvme0n1) 00:09:58.210 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.210 fio-3.35 00:09:58.210 Starting 1 thread 00:09:59.142 00:09:59.142 job0: (groupid=0, jobs=1): err= 0: pid=63226: Sat Dec 7 04:25:02 2024 00:09:59.142 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:59.142 slat (nsec): min=10702, max=61493, avg=13298.73, stdev=4254.50 00:09:59.142 clat (usec): min=129, max=501, avg=168.62, stdev=19.61 00:09:59.142 lat (usec): min=141, max=513, avg=181.92, stdev=20.21 00:09:59.142 clat percentiles (usec): 00:09:59.142 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:59.142 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:09:59.142 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:09:59.142 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 
255], 99.95th=[ 269], 00:09:59.142 | 99.99th=[ 502] 00:09:59.142 write: IOPS=3389, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1001msec); 0 zone resets 00:09:59.142 slat (nsec): min=13720, max=93759, avg=20419.48, stdev=6312.66 00:09:59.142 clat (usec): min=80, max=248, avg=106.75, stdev=14.71 00:09:59.142 lat (usec): min=97, max=341, avg=127.16, stdev=16.88 00:09:59.142 clat percentiles (usec): 00:09:59.142 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 95], 00:09:59.142 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 108], 00:09:59.142 | 70.00th=[ 112], 80.00th=[ 119], 90.00th=[ 127], 95.00th=[ 135], 00:09:59.142 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 184], 00:09:59.142 | 99.99th=[ 249] 00:09:59.142 bw ( KiB/s): min=13352, max=13352, per=98.48%, avg=13352.00, stdev= 0.00, samples=1 00:09:59.142 iops : min= 3338, max= 3338, avg=3338.00, stdev= 0.00, samples=1 00:09:59.142 lat (usec) : 100=20.05%, 250=79.86%, 500=0.08%, 750=0.02% 00:09:59.142 cpu : usr=2.30%, sys=8.60%, ctx=6465, majf=0, minf=5 00:09:59.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.142 issued rwts: total=3072,3393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.142 00:09:59.142 Run status group 0 (all jobs): 00:09:59.142 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:59.142 WRITE: bw=13.2MiB/s (13.9MB/s), 13.2MiB/s-13.2MiB/s (13.9MB/s-13.9MB/s), io=13.3MiB (13.9MB), run=1001-1001msec 00:09:59.142 00:09:59.142 Disk stats (read/write): 00:09:59.142 nvme0n1: ios=2784/3072, merge=0/0, ticks=502/377, in_queue=879, util=91.28% 00:09:59.142 04:25:02 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:59.142 04:25:02 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.142 04:25:02 -- common/autotest_common.sh@1208 -- # local i=0 00:09:59.142 04:25:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:59.142 04:25:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.142 04:25:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:59.142 04:25:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.142 04:25:02 -- common/autotest_common.sh@1220 -- # return 0 00:09:59.142 04:25:02 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:59.142 04:25:02 -- target/nmic.sh@53 -- # nvmftestfini 00:09:59.142 04:25:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:59.142 04:25:02 -- nvmf/common.sh@116 -- # sync 00:09:59.142 04:25:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:59.142 04:25:02 -- nvmf/common.sh@119 -- # set +e 00:09:59.142 04:25:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:59.142 04:25:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:59.142 rmmod nvme_tcp 00:09:59.142 rmmod nvme_fabrics 00:09:59.142 rmmod nvme_keyring 00:09:59.142 04:25:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:59.142 04:25:02 -- nvmf/common.sh@123 -- # set -e 00:09:59.142 04:25:02 -- nvmf/common.sh@124 -- # return 0 00:09:59.142 04:25:02 -- nvmf/common.sh@477 -- # '[' -n 63134 ']' 00:09:59.142 04:25:02 -- nvmf/common.sh@478 -- 
# killprocess 63134 00:09:59.142 04:25:02 -- common/autotest_common.sh@936 -- # '[' -z 63134 ']' 00:09:59.142 04:25:02 -- common/autotest_common.sh@940 -- # kill -0 63134 00:09:59.142 04:25:02 -- common/autotest_common.sh@941 -- # uname 00:09:59.142 04:25:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:59.142 04:25:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63134 00:09:59.399 04:25:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:59.399 killing process with pid 63134 00:09:59.399 04:25:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:59.399 04:25:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63134' 00:09:59.399 04:25:02 -- common/autotest_common.sh@955 -- # kill 63134 00:09:59.399 04:25:02 -- common/autotest_common.sh@960 -- # wait 63134 00:09:59.399 04:25:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:59.399 04:25:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:59.399 04:25:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:59.399 04:25:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.399 04:25:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:59.399 04:25:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.399 04:25:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.399 04:25:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.399 04:25:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:59.399 00:09:59.399 real 0m5.833s 00:09:59.399 user 0m18.893s 00:09:59.399 sys 0m2.169s 00:09:59.399 04:25:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:59.399 04:25:02 -- common/autotest_common.sh@10 -- # set +x 00:09:59.399 ************************************ 00:09:59.399 END TEST nvmf_nmic 00:09:59.399 ************************************ 00:09:59.658 04:25:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.658 04:25:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:59.658 04:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.658 04:25:02 -- common/autotest_common.sh@10 -- # set +x 00:09:59.658 ************************************ 00:09:59.658 START TEST nvmf_fio_target 00:09:59.658 ************************************ 00:09:59.658 04:25:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.658 * Looking for test storage... 
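The teardown traced above runs in a fixed order: disconnect the initiator, sync, unload the nvme-tcp/nvme-fabrics modules, kill the target by its pid, then drop the target namespace and flush the initiator interface. As plain commands (the _remove_spdk_ns helper is assumed here to boil down to an "ip netns delete"):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 63134 && wait 63134               # nvmfpid recorded by nvmfappstart
  ip netns delete nvmf_tgt_ns_spdk       # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if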
00:09:59.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.658 04:25:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:59.658 04:25:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:59.658 04:25:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:59.658 04:25:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:59.658 04:25:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:59.658 04:25:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:59.658 04:25:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:59.658 04:25:02 -- scripts/common.sh@335 -- # IFS=.-: 00:09:59.658 04:25:02 -- scripts/common.sh@335 -- # read -ra ver1 00:09:59.658 04:25:02 -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.658 04:25:02 -- scripts/common.sh@336 -- # read -ra ver2 00:09:59.658 04:25:02 -- scripts/common.sh@337 -- # local 'op=<' 00:09:59.658 04:25:02 -- scripts/common.sh@339 -- # ver1_l=2 00:09:59.658 04:25:02 -- scripts/common.sh@340 -- # ver2_l=1 00:09:59.658 04:25:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:59.658 04:25:02 -- scripts/common.sh@343 -- # case "$op" in 00:09:59.658 04:25:02 -- scripts/common.sh@344 -- # : 1 00:09:59.658 04:25:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:59.658 04:25:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.658 04:25:02 -- scripts/common.sh@364 -- # decimal 1 00:09:59.658 04:25:02 -- scripts/common.sh@352 -- # local d=1 00:09:59.658 04:25:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.658 04:25:02 -- scripts/common.sh@354 -- # echo 1 00:09:59.658 04:25:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:59.658 04:25:02 -- scripts/common.sh@365 -- # decimal 2 00:09:59.658 04:25:02 -- scripts/common.sh@352 -- # local d=2 00:09:59.658 04:25:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.658 04:25:02 -- scripts/common.sh@354 -- # echo 2 00:09:59.658 04:25:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:59.658 04:25:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:59.658 04:25:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:59.658 04:25:02 -- scripts/common.sh@367 -- # return 0 00:09:59.658 04:25:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.658 04:25:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:59.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.658 --rc genhtml_branch_coverage=1 00:09:59.658 --rc genhtml_function_coverage=1 00:09:59.658 --rc genhtml_legend=1 00:09:59.658 --rc geninfo_all_blocks=1 00:09:59.658 --rc geninfo_unexecuted_blocks=1 00:09:59.658 00:09:59.658 ' 00:09:59.658 04:25:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:59.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.658 --rc genhtml_branch_coverage=1 00:09:59.658 --rc genhtml_function_coverage=1 00:09:59.658 --rc genhtml_legend=1 00:09:59.658 --rc geninfo_all_blocks=1 00:09:59.658 --rc geninfo_unexecuted_blocks=1 00:09:59.658 00:09:59.658 ' 00:09:59.658 04:25:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:59.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.658 --rc genhtml_branch_coverage=1 00:09:59.658 --rc genhtml_function_coverage=1 00:09:59.658 --rc genhtml_legend=1 00:09:59.658 --rc geninfo_all_blocks=1 00:09:59.658 --rc geninfo_unexecuted_blocks=1 00:09:59.658 00:09:59.658 ' 00:09:59.658 
04:25:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:59.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.658 --rc genhtml_branch_coverage=1 00:09:59.658 --rc genhtml_function_coverage=1 00:09:59.658 --rc genhtml_legend=1 00:09:59.658 --rc geninfo_all_blocks=1 00:09:59.658 --rc geninfo_unexecuted_blocks=1 00:09:59.658 00:09:59.658 ' 00:09:59.658 04:25:02 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.658 04:25:02 -- nvmf/common.sh@7 -- # uname -s 00:09:59.658 04:25:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.658 04:25:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.658 04:25:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.658 04:25:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.658 04:25:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.658 04:25:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.658 04:25:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.658 04:25:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.658 04:25:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.658 04:25:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.658 04:25:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:59.658 04:25:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:09:59.658 04:25:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.658 04:25:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.658 04:25:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.658 04:25:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.658 04:25:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.658 04:25:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.658 04:25:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.658 04:25:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.658 04:25:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.658 04:25:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.659 04:25:02 -- paths/export.sh@5 -- # export PATH 00:09:59.659 04:25:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.659 04:25:02 -- nvmf/common.sh@46 -- # : 0 00:09:59.659 04:25:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:59.659 04:25:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:59.659 04:25:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:59.659 04:25:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.659 04:25:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.659 04:25:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:59.659 04:25:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:59.659 04:25:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:59.659 04:25:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.659 04:25:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.659 04:25:02 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.659 04:25:02 -- target/fio.sh@16 -- # nvmftestinit 00:09:59.659 04:25:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:59.659 04:25:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.659 04:25:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:59.659 04:25:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:59.659 04:25:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:59.659 04:25:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.659 04:25:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.659 04:25:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.659 04:25:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:59.659 04:25:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:59.659 04:25:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:59.659 04:25:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:59.659 04:25:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:59.659 04:25:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:59.659 04:25:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.659 04:25:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.659 04:25:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:59.659 04:25:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:59.659 04:25:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.659 04:25:02 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.659 04:25:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.659 04:25:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.659 04:25:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.659 04:25:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.659 04:25:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.659 04:25:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.659 04:25:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:59.918 04:25:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:59.918 Cannot find device "nvmf_tgt_br" 00:09:59.918 04:25:02 -- nvmf/common.sh@154 -- # true 00:09:59.918 04:25:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.918 Cannot find device "nvmf_tgt_br2" 00:09:59.918 04:25:02 -- nvmf/common.sh@155 -- # true 00:09:59.918 04:25:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:59.918 04:25:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:59.918 Cannot find device "nvmf_tgt_br" 00:09:59.918 04:25:02 -- nvmf/common.sh@157 -- # true 00:09:59.918 04:25:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:59.918 Cannot find device "nvmf_tgt_br2" 00:09:59.918 04:25:02 -- nvmf/common.sh@158 -- # true 00:09:59.918 04:25:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:59.918 04:25:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:59.918 04:25:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.918 04:25:03 -- nvmf/common.sh@161 -- # true 00:09:59.918 04:25:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.918 04:25:03 -- nvmf/common.sh@162 -- # true 00:09:59.918 04:25:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.918 04:25:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.918 04:25:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.918 04:25:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.918 04:25:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.918 04:25:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.918 04:25:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.918 04:25:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:59.918 04:25:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:59.918 04:25:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:59.918 04:25:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:59.918 04:25:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:59.918 04:25:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:59.918 04:25:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.918 04:25:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:59.918 04:25:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.178 04:25:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:00.178 04:25:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:00.178 04:25:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.178 04:25:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.178 04:25:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.178 04:25:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.178 04:25:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.178 04:25:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:00.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:00.178 00:10:00.178 --- 10.0.0.2 ping statistics --- 00:10:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.178 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:00.178 04:25:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:00.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:00.178 00:10:00.178 --- 10.0.0.3 ping statistics --- 00:10:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.178 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:00.178 04:25:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:00.178 00:10:00.178 --- 10.0.0.1 ping statistics --- 00:10:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.178 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:00.178 04:25:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.178 04:25:03 -- nvmf/common.sh@421 -- # return 0 00:10:00.178 04:25:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:00.178 04:25:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.178 04:25:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:00.178 04:25:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:00.178 04:25:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.178 04:25:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:00.178 04:25:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:00.178 04:25:03 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:00.178 04:25:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:00.178 04:25:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.178 04:25:03 -- common/autotest_common.sh@10 -- # set +x 00:10:00.178 04:25:03 -- nvmf/common.sh@469 -- # nvmfpid=63411 00:10:00.178 04:25:03 -- nvmf/common.sh@470 -- # waitforlisten 63411 00:10:00.178 04:25:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.178 04:25:03 -- common/autotest_common.sh@829 -- # '[' -z 63411 ']' 00:10:00.178 04:25:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
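nvmf_veth_init, traced above for fio.sh, rebuilds the same test topology every run: a target network namespace, three veth pairs, a bridge joining the host-side peers, an iptables accept rule for the NVMe/TCP port, and a ping in each direction as a sanity check. Condensed from the traced commands (the individual "ip link set ... up" steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1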
00:10:00.178 04:25:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.178 04:25:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.178 04:25:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.178 04:25:03 -- common/autotest_common.sh@10 -- # set +x 00:10:00.178 [2024-12-07 04:25:03.313629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:00.178 [2024-12-07 04:25:03.313970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.437 [2024-12-07 04:25:03.452868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.437 [2024-12-07 04:25:03.500590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:00.437 [2024-12-07 04:25:03.500990] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.437 [2024-12-07 04:25:03.501057] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.437 [2024-12-07 04:25:03.501317] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.437 [2024-12-07 04:25:03.501489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.437 [2024-12-07 04:25:03.501614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.437 [2024-12-07 04:25:03.501743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.437 [2024-12-07 04:25:03.501743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.374 04:25:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.374 04:25:04 -- common/autotest_common.sh@862 -- # return 0 00:10:01.374 04:25:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:01.374 04:25:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.374 04:25:04 -- common/autotest_common.sh@10 -- # set +x 00:10:01.374 04:25:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.374 04:25:04 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.374 [2024-12-07 04:25:04.570639] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.374 04:25:04 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.942 04:25:04 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:01.942 04:25:04 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.200 04:25:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:02.200 04:25:05 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.200 04:25:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:02.200 04:25:05 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.458 04:25:05 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:02.458 04:25:05 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:02.716 04:25:05 -- target/fio.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.282 04:25:06 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:03.282 04:25:06 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.282 04:25:06 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:03.282 04:25:06 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.540 04:25:06 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:03.540 04:25:06 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:03.797 04:25:07 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.054 04:25:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.054 04:25:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.311 04:25:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.311 04:25:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.568 04:25:07 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.827 [2024-12-07 04:25:07.893748] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.827 04:25:07 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:05.085 04:25:08 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:05.359 04:25:08 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.359 04:25:08 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:05.359 04:25:08 -- common/autotest_common.sh@1187 -- # local i=0 00:10:05.359 04:25:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.359 04:25:08 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:10:05.359 04:25:08 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:10:05.359 04:25:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:07.890 04:25:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:07.890 04:25:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:07.890 04:25:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.890 04:25:10 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:07.890 04:25:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.890 04:25:10 -- common/autotest_common.sh@1197 -- # return 0 00:10:07.890 04:25:10 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.890 [global] 00:10:07.890 thread=1 00:10:07.890 invalidate=1 00:10:07.890 rw=write 00:10:07.890 time_based=1 00:10:07.890 runtime=1 00:10:07.890 ioengine=libaio 00:10:07.890 direct=1 00:10:07.890 bs=4096 00:10:07.890 iodepth=1 00:10:07.890 norandommap=0 
00:10:07.890 numjobs=1 00:10:07.890 00:10:07.890 verify_dump=1 00:10:07.890 verify_backlog=512 00:10:07.890 verify_state_save=0 00:10:07.890 do_verify=1 00:10:07.890 verify=crc32c-intel 00:10:07.890 [job0] 00:10:07.890 filename=/dev/nvme0n1 00:10:07.890 [job1] 00:10:07.890 filename=/dev/nvme0n2 00:10:07.890 [job2] 00:10:07.890 filename=/dev/nvme0n3 00:10:07.891 [job3] 00:10:07.891 filename=/dev/nvme0n4 00:10:07.891 Could not set queue depth (nvme0n1) 00:10:07.891 Could not set queue depth (nvme0n2) 00:10:07.891 Could not set queue depth (nvme0n3) 00:10:07.891 Could not set queue depth (nvme0n4) 00:10:07.891 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.891 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.891 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.891 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.891 fio-3.35 00:10:07.891 Starting 4 threads 00:10:08.826 00:10:08.826 job0: (groupid=0, jobs=1): err= 0: pid=63596: Sat Dec 7 04:25:11 2024 00:10:08.826 read: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec) 00:10:08.826 slat (nsec): min=10677, max=52318, avg=13380.26, stdev=3552.79 00:10:08.826 clat (usec): min=128, max=476, avg=163.42, stdev=15.76 00:10:08.826 lat (usec): min=140, max=491, avg=176.80, stdev=16.59 00:10:08.826 clat percentiles (usec): 00:10:08.826 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:08.826 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:08.826 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:10:08.826 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 217], 99.95th=[ 229], 00:10:08.826 | 99.99th=[ 478] 00:10:08.826 write: IOPS=3073, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:10:08.826 slat (usec): min=13, max=113, avg=20.52, stdev= 4.81 00:10:08.826 clat (usec): min=88, max=223, avg=125.16, stdev=13.36 00:10:08.826 lat (usec): min=106, max=336, avg=145.69, stdev=14.43 00:10:08.826 clat percentiles (usec): 00:10:08.826 | 1.00th=[ 99], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 115], 00:10:08.826 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 127], 00:10:08.826 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:10:08.826 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 221], 00:10:08.826 | 99.99th=[ 225] 00:10:08.826 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:08.826 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:08.826 lat (usec) : 100=0.67%, 250=99.32%, 500=0.02% 00:10:08.826 cpu : usr=2.40%, sys=8.00%, ctx=6145, majf=0, minf=5 00:10:08.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.826 issued rwts: total=3072,3073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.826 job1: (groupid=0, jobs=1): err= 0: pid=63597: Sat Dec 7 04:25:11 2024 00:10:08.826 read: IOPS=1840, BW=7361KiB/s (7537kB/s)(7368KiB/1001msec) 00:10:08.826 slat (nsec): min=11049, max=42469, avg=14257.33, stdev=3762.58 00:10:08.826 clat (usec): min=159, max=2717, avg=261.47, 
stdev=66.26 00:10:08.826 lat (usec): min=175, max=2733, avg=275.72, stdev=66.71 00:10:08.826 clat percentiles (usec): 00:10:08.826 | 1.00th=[ 188], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:10:08.826 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:10:08.826 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:08.826 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 807], 99.95th=[ 2704], 00:10:08.826 | 99.99th=[ 2704] 00:10:08.826 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:08.826 slat (usec): min=16, max=157, avg=24.68, stdev= 8.70 00:10:08.826 clat (usec): min=96, max=773, avg=212.36, stdev=49.88 00:10:08.826 lat (usec): min=120, max=795, avg=237.04, stdev=53.77 00:10:08.826 clat percentiles (usec): 00:10:08.826 | 1.00th=[ 111], 5.00th=[ 139], 10.00th=[ 174], 20.00th=[ 186], 00:10:08.826 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 212], 00:10:08.826 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 255], 95.00th=[ 330], 00:10:08.826 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 433], 99.95th=[ 562], 00:10:08.826 | 99.99th=[ 775] 00:10:08.826 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:08.826 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:08.826 lat (usec) : 100=0.05%, 250=63.98%, 500=35.78%, 750=0.10%, 1000=0.05% 00:10:08.826 lat (msec) : 4=0.03% 00:10:08.826 cpu : usr=1.80%, sys=5.70%, ctx=3890, majf=0, minf=17 00:10:08.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.826 issued rwts: total=1842,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.826 job2: (groupid=0, jobs=1): err= 0: pid=63599: Sat Dec 7 04:25:11 2024 00:10:08.826 read: IOPS=2670, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:10:08.826 slat (nsec): min=10478, max=41123, avg=13089.34, stdev=2822.40 00:10:08.826 clat (usec): min=139, max=1847, avg=176.94, stdev=36.96 00:10:08.826 lat (usec): min=151, max=1860, avg=190.03, stdev=37.17 00:10:08.826 clat percentiles (usec): 00:10:08.826 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:08.826 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:08.826 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:08.826 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 375], 99.95th=[ 523], 00:10:08.826 | 99.99th=[ 1844] 00:10:08.826 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:08.826 slat (usec): min=13, max=100, avg=20.19, stdev= 4.96 00:10:08.826 clat (usec): min=101, max=564, avg=137.42, stdev=16.54 00:10:08.826 lat (usec): min=118, max=586, avg=157.61, stdev=17.72 00:10:08.826 clat percentiles (usec): 00:10:08.826 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 126], 00:10:08.826 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:10:08.826 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:10:08.826 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 202], 99.95th=[ 260], 00:10:08.826 | 99.99th=[ 562] 00:10:08.826 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:08.827 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:08.827 lat (usec) : 250=99.88%, 500=0.07%, 750=0.03% 
00:10:08.827 lat (msec) : 2=0.02% 00:10:08.827 cpu : usr=1.60%, sys=7.90%, ctx=5745, majf=0, minf=5 00:10:08.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.827 issued rwts: total=2673,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.827 job3: (groupid=0, jobs=1): err= 0: pid=63600: Sat Dec 7 04:25:11 2024 00:10:08.827 read: IOPS=1920, BW=7680KiB/s (7865kB/s)(7688KiB/1001msec) 00:10:08.827 slat (nsec): min=11635, max=62374, avg=15846.73, stdev=4174.89 00:10:08.827 clat (usec): min=160, max=1970, avg=262.06, stdev=57.14 00:10:08.827 lat (usec): min=176, max=1992, avg=277.90, stdev=57.73 00:10:08.827 clat percentiles (usec): 00:10:08.827 | 1.00th=[ 198], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:10:08.827 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:10:08.827 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:10:08.827 | 99.00th=[ 490], 99.50th=[ 519], 99.90th=[ 1029], 99.95th=[ 1975], 00:10:08.827 | 99.99th=[ 1975] 00:10:08.827 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:08.827 slat (usec): min=16, max=105, avg=24.01, stdev= 6.17 00:10:08.827 clat (usec): min=101, max=4267, avg=199.97, stdev=118.09 00:10:08.827 lat (usec): min=119, max=4286, avg=223.98, stdev=118.25 00:10:08.827 clat percentiles (usec): 00:10:08.827 | 1.00th=[ 113], 5.00th=[ 129], 10.00th=[ 159], 20.00th=[ 180], 00:10:08.827 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:10:08.827 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 243], 00:10:08.827 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 1205], 99.95th=[ 3195], 00:10:08.827 | 99.99th=[ 4293] 00:10:08.827 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:08.827 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:08.827 lat (usec) : 250=69.72%, 500=29.82%, 750=0.33% 00:10:08.827 lat (msec) : 2=0.08%, 4=0.03%, 10=0.03% 00:10:08.827 cpu : usr=1.60%, sys=6.30%, ctx=3977, majf=0, minf=9 00:10:08.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.827 issued rwts: total=1922,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.827 00:10:08.827 Run status group 0 (all jobs): 00:10:08.827 READ: bw=37.1MiB/s (38.9MB/s), 7361KiB/s-12.0MiB/s (7537kB/s-12.6MB/s), io=37.1MiB (38.9MB), run=1000-1001msec 00:10:08.827 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1000-1001msec 00:10:08.827 00:10:08.827 Disk stats (read/write): 00:10:08.827 nvme0n1: ios=2610/2675, merge=0/0, ticks=468/356, in_queue=824, util=87.78% 00:10:08.827 nvme0n2: ios=1541/1808, merge=0/0, ticks=406/398, in_queue=804, util=87.26% 00:10:08.827 nvme0n3: ios=2323/2560, merge=0/0, ticks=427/370, in_queue=797, util=89.17% 00:10:08.827 nvme0n4: ios=1536/1877, merge=0/0, ticks=399/394, in_queue=793, util=89.41% 00:10:08.827 04:25:11 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 
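The randwrite pass goes through the same fio-wrapper entry point as the write pass above; the job file it generates is echoed immediately below. Folded into a standalone fio invocation it would look roughly like this sketch (the /tmp path is hypothetical and only job0 is shown; the flag-to-option mapping is inferred from the echoed job file, not from the wrapper script itself):

  cat > /tmp/nvmf-randwrite.fio <<'EOF'
  [global]
  ioengine=libaio
  direct=1
  thread=1
  invalidate=1
  rw=randwrite
  bs=4096
  iodepth=1
  numjobs=1
  time_based=1
  runtime=1
  norandommap=0
  do_verify=1
  verify=crc32c-intel
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  [job0]
  filename=/dev/nvme0n1
  EOF
  fio /tmp/nvmf-randwrite.fio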
00:10:08.827 [global] 00:10:08.827 thread=1 00:10:08.827 invalidate=1 00:10:08.827 rw=randwrite 00:10:08.827 time_based=1 00:10:08.827 runtime=1 00:10:08.827 ioengine=libaio 00:10:08.827 direct=1 00:10:08.827 bs=4096 00:10:08.827 iodepth=1 00:10:08.827 norandommap=0 00:10:08.827 numjobs=1 00:10:08.827 00:10:08.827 verify_dump=1 00:10:08.827 verify_backlog=512 00:10:08.827 verify_state_save=0 00:10:08.827 do_verify=1 00:10:08.827 verify=crc32c-intel 00:10:08.827 [job0] 00:10:08.827 filename=/dev/nvme0n1 00:10:08.827 [job1] 00:10:08.827 filename=/dev/nvme0n2 00:10:08.827 [job2] 00:10:08.827 filename=/dev/nvme0n3 00:10:08.827 [job3] 00:10:08.827 filename=/dev/nvme0n4 00:10:08.827 Could not set queue depth (nvme0n1) 00:10:08.827 Could not set queue depth (nvme0n2) 00:10:08.827 Could not set queue depth (nvme0n3) 00:10:08.827 Could not set queue depth (nvme0n4) 00:10:09.086 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.086 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.086 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.086 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.086 fio-3.35 00:10:09.086 Starting 4 threads 00:10:10.465 00:10:10.465 job0: (groupid=0, jobs=1): err= 0: pid=63659: Sat Dec 7 04:25:13 2024 00:10:10.465 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:10.465 slat (nsec): min=10332, max=51236, avg=14021.84, stdev=5007.71 00:10:10.465 clat (usec): min=124, max=1582, avg=163.09, stdev=32.48 00:10:10.465 lat (usec): min=136, max=1609, avg=177.11, stdev=33.65 00:10:10.465 clat percentiles (usec): 00:10:10.465 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:10:10.465 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:10:10.465 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:10:10.465 | 99.00th=[ 221], 99.50th=[ 245], 99.90th=[ 347], 99.95th=[ 515], 00:10:10.465 | 99.99th=[ 1582] 00:10:10.465 write: IOPS=3074, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:10.465 slat (nsec): min=12935, max=92326, avg=19734.52, stdev=6016.36 00:10:10.465 clat (usec): min=90, max=211, avg=125.17, stdev=14.76 00:10:10.465 lat (usec): min=107, max=283, avg=144.90, stdev=16.27 00:10:10.465 clat percentiles (usec): 00:10:10.465 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 114], 00:10:10.465 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:10:10.465 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 153], 00:10:10.465 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 200], 00:10:10.465 | 99.99th=[ 212] 00:10:10.465 bw ( KiB/s): min=12424, max=12424, per=30.15%, avg=12424.00, stdev= 0.00, samples=1 00:10:10.465 iops : min= 3106, max= 3106, avg=3106.00, stdev= 0.00, samples=1 00:10:10.465 lat (usec) : 100=1.17%, 250=98.60%, 500=0.20%, 750=0.02% 00:10:10.465 lat (msec) : 2=0.02% 00:10:10.465 cpu : usr=1.80%, sys=8.60%, ctx=6151, majf=0, minf=9 00:10:10.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.465 issued rwts: total=3072,3078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.465 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:10.465 job1: (groupid=0, jobs=1): err= 0: pid=63660: Sat Dec 7 04:25:13 2024 00:10:10.465 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:10.465 slat (nsec): min=10097, max=56111, avg=12284.53, stdev=2975.80 00:10:10.465 clat (usec): min=117, max=1603, avg=162.46, stdev=30.63 00:10:10.465 lat (usec): min=134, max=1617, avg=174.75, stdev=30.86 00:10:10.465 clat percentiles (usec): 00:10:10.465 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:10.466 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:10:10.466 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 190], 00:10:10.466 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 243], 99.95th=[ 441], 00:10:10.466 | 99.99th=[ 1598] 00:10:10.466 write: IOPS=3136, BW=12.3MiB/s (12.8MB/s)(12.3MiB/1001msec); 0 zone resets 00:10:10.466 slat (nsec): min=13629, max=96410, avg=19947.12, stdev=5410.56 00:10:10.466 clat (usec): min=93, max=332, avg=124.44, stdev=14.26 00:10:10.466 lat (usec): min=111, max=352, avg=144.38, stdev=15.36 00:10:10.466 clat percentiles (usec): 00:10:10.466 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 114], 00:10:10.466 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:10:10.466 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 151], 00:10:10.466 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 196], 99.95th=[ 251], 00:10:10.466 | 99.99th=[ 334] 00:10:10.466 bw ( KiB/s): min=12408, max=12408, per=30.11%, avg=12408.00, stdev= 0.00, samples=1 00:10:10.466 iops : min= 3102, max= 3102, avg=3102.00, stdev= 0.00, samples=1 00:10:10.466 lat (usec) : 100=0.34%, 250=99.58%, 500=0.06% 00:10:10.466 lat (msec) : 2=0.02% 00:10:10.466 cpu : usr=2.40%, sys=7.90%, ctx=6213, majf=0, minf=9 00:10:10.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.466 issued rwts: total=3072,3140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.466 job2: (groupid=0, jobs=1): err= 0: pid=63661: Sat Dec 7 04:25:13 2024 00:10:10.466 read: IOPS=1631, BW=6525KiB/s (6682kB/s)(6532KiB/1001msec) 00:10:10.466 slat (nsec): min=10499, max=64259, avg=15637.17, stdev=4868.83 00:10:10.466 clat (usec): min=155, max=834, avg=292.72, stdev=62.79 00:10:10.466 lat (usec): min=168, max=858, avg=308.35, stdev=64.57 00:10:10.466 clat percentiles (usec): 00:10:10.466 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 258], 00:10:10.466 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:10:10.466 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 469], 00:10:10.466 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 570], 99.95th=[ 832], 00:10:10.466 | 99.99th=[ 832] 00:10:10.466 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:10.466 slat (usec): min=15, max=103, avg=24.96, stdev= 6.85 00:10:10.466 clat (usec): min=102, max=691, avg=214.22, stdev=39.24 00:10:10.466 lat (usec): min=121, max=712, avg=239.18, stdev=40.08 00:10:10.466 clat percentiles (usec): 00:10:10.466 | 1.00th=[ 120], 5.00th=[ 137], 10.00th=[ 153], 20.00th=[ 192], 00:10:10.466 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:10:10.466 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 00:10:10.466 | 99.00th=[ 
277], 99.50th=[ 289], 99.90th=[ 355], 99.95th=[ 668], 00:10:10.466 | 99.99th=[ 693] 00:10:10.466 bw ( KiB/s): min= 8192, max= 8192, per=19.88%, avg=8192.00, stdev= 0.00, samples=2 00:10:10.466 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:10.466 lat (usec) : 250=53.30%, 500=45.31%, 750=1.36%, 1000=0.03% 00:10:10.466 cpu : usr=2.20%, sys=5.40%, ctx=3688, majf=0, minf=9 00:10:10.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.466 issued rwts: total=1633,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.466 job3: (groupid=0, jobs=1): err= 0: pid=63662: Sat Dec 7 04:25:13 2024 00:10:10.466 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:10.466 slat (nsec): min=13543, max=60280, avg=17308.89, stdev=4824.57 00:10:10.466 clat (usec): min=173, max=880, avg=286.79, stdev=49.28 00:10:10.466 lat (usec): min=188, max=897, avg=304.09, stdev=51.83 00:10:10.466 clat percentiles (usec): 00:10:10.466 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 258], 00:10:10.466 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:10:10.466 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 412], 00:10:10.466 | 99.00th=[ 465], 99.50th=[ 502], 99.90th=[ 586], 99.95th=[ 881], 00:10:10.466 | 99.99th=[ 881] 00:10:10.466 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8184KiB/1001msec); 0 zone resets 00:10:10.466 slat (nsec): min=18995, max=89385, avg=28282.26, stdev=7750.86 00:10:10.466 clat (usec): min=106, max=464, avg=228.46, stdev=61.57 00:10:10.466 lat (usec): min=128, max=537, avg=256.74, stdev=65.67 00:10:10.466 clat percentiles (usec): 00:10:10.466 | 1.00th=[ 117], 5.00th=[ 126], 10.00th=[ 145], 20.00th=[ 196], 00:10:10.466 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:10:10.466 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 289], 95.00th=[ 375], 00:10:10.466 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 465], 00:10:10.466 | 99.99th=[ 465] 00:10:10.466 bw ( KiB/s): min= 8192, max= 8192, per=19.88%, avg=8192.00, stdev= 0.00, samples=1 00:10:10.466 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:10.466 lat (usec) : 250=49.55%, 500=50.22%, 750=0.20%, 1000=0.03% 00:10:10.466 cpu : usr=1.70%, sys=6.60%, ctx=3582, majf=0, minf=17 00:10:10.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.466 issued rwts: total=1536,2046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.466 00:10:10.466 Run status group 0 (all jobs): 00:10:10.466 READ: bw=36.3MiB/s (38.1MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=36.4MiB (38.1MB), run=1001-1001msec 00:10:10.466 WRITE: bw=40.2MiB/s (42.2MB/s), 8176KiB/s-12.3MiB/s (8372kB/s-12.8MB/s), io=40.3MiB (42.2MB), run=1001-1001msec 00:10:10.466 00:10:10.466 Disk stats (read/write): 00:10:10.466 nvme0n1: ios=2610/2729, merge=0/0, ticks=438/347, in_queue=785, util=87.27% 00:10:10.466 nvme0n2: ios=2609/2797, merge=0/0, ticks=432/370, in_queue=802, util=87.79% 00:10:10.466 nvme0n3: ios=1536/1622, merge=0/0, ticks=451/360, 
in_queue=811, util=89.21% 00:10:10.466 nvme0n4: ios=1508/1536, merge=0/0, ticks=442/370, in_queue=812, util=89.68% 00:10:10.466 04:25:13 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:10.466 [global] 00:10:10.466 thread=1 00:10:10.466 invalidate=1 00:10:10.466 rw=write 00:10:10.466 time_based=1 00:10:10.466 runtime=1 00:10:10.466 ioengine=libaio 00:10:10.466 direct=1 00:10:10.466 bs=4096 00:10:10.466 iodepth=128 00:10:10.466 norandommap=0 00:10:10.466 numjobs=1 00:10:10.466 00:10:10.466 verify_dump=1 00:10:10.466 verify_backlog=512 00:10:10.466 verify_state_save=0 00:10:10.466 do_verify=1 00:10:10.466 verify=crc32c-intel 00:10:10.466 [job0] 00:10:10.466 filename=/dev/nvme0n1 00:10:10.466 [job1] 00:10:10.466 filename=/dev/nvme0n2 00:10:10.466 [job2] 00:10:10.466 filename=/dev/nvme0n3 00:10:10.466 [job3] 00:10:10.466 filename=/dev/nvme0n4 00:10:10.466 Could not set queue depth (nvme0n1) 00:10:10.466 Could not set queue depth (nvme0n2) 00:10:10.466 Could not set queue depth (nvme0n3) 00:10:10.466 Could not set queue depth (nvme0n4) 00:10:10.466 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.466 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.466 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.466 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.466 fio-3.35 00:10:10.466 Starting 4 threads 00:10:11.472 00:10:11.472 job0: (groupid=0, jobs=1): err= 0: pid=63722: Sat Dec 7 04:25:14 2024 00:10:11.472 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:10:11.472 slat (usec): min=5, max=3894, avg=89.59, stdev=422.86 00:10:11.472 clat (usec): min=8516, max=14097, avg=12133.38, stdev=722.53 00:10:11.472 lat (usec): min=10934, max=14107, avg=12222.97, stdev=586.97 00:10:11.472 clat percentiles (usec): 00:10:11.472 | 1.00th=[ 9503], 5.00th=[11207], 10.00th=[11469], 20.00th=[11600], 00:10:11.472 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:10:11.472 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13173], 00:10:11.472 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14091], 99.95th=[14091], 00:10:11.472 | 99.99th=[14091] 00:10:11.472 write: IOPS=5270, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1002msec); 0 zone resets 00:10:11.472 slat (usec): min=8, max=2828, avg=94.87, stdev=405.34 00:10:11.472 clat (usec): min=212, max=13603, avg=12190.46, stdev=1082.27 00:10:11.472 lat (usec): min=2666, max=13886, avg=12285.33, stdev=1005.95 00:10:11.472 clat percentiles (usec): 00:10:11.472 | 1.00th=[ 6456], 5.00th=[10683], 10.00th=[11600], 20.00th=[11863], 00:10:11.472 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:10:11.472 | 70.00th=[12649], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:10:11.472 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:10:11.472 | 99.99th=[13566] 00:10:11.472 bw ( KiB/s): min=20480, max=20744, per=25.87%, avg=20612.00, stdev=186.68, samples=2 00:10:11.472 iops : min= 5120, max= 5186, avg=5153.00, stdev=46.67, samples=2 00:10:11.472 lat (usec) : 250=0.01% 00:10:11.472 lat (msec) : 4=0.31%, 10=2.22%, 20=97.46% 00:10:11.473 cpu : usr=4.80%, sys=13.79%, ctx=328, majf=0, minf=10 00:10:11.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 
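[editor's note] The fio-wrapper call traced above generates an ordinary fio job file from the [global]/[jobX] sections it prints. A minimal sketch of an equivalent standalone run, assuming the NVMe-oF namespaces still enumerate as /dev/nvme0n1 through /dev/nvme0n4 and using only options that appear in the config above (the /tmp path is hypothetical; the real test drives this through scripts/fio-wrapper):

    # write out the same job description fio-wrapper built for the 128-deep write pass
    cat > /tmp/nvmf-write.fio <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    thread=1
    invalidate=1
    rw=write
    bs=4096
    iodepth=128
    numjobs=1
    time_based=1
    runtime=1
    norandommap=0
    do_verify=1
    verify=crc32c-intel
    verify_dump=1
    verify_backlog=512
    verify_state_save=0

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF

    # run it against the kernel-attached namespaces (requires root for direct I/O)
    fio /tmp/nvmf-write.fio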
00:10:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.473 issued rwts: total=5120,5281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.473 job1: (groupid=0, jobs=1): err= 0: pid=63723: Sat Dec 7 04:25:14 2024 00:10:11.473 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:11.473 slat (usec): min=7, max=5540, avg=91.08, stdev=496.22 00:10:11.473 clat (usec): min=6841, max=18920, avg=11962.29, stdev=1278.09 00:10:11.473 lat (usec): min=6862, max=21204, avg=12053.37, stdev=1335.38 00:10:11.473 clat percentiles (usec): 00:10:11.473 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[10814], 20.00th=[11207], 00:10:11.473 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:10:11.473 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13304], 95.00th=[14353], 00:10:11.473 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18744], 99.95th=[19006], 00:10:11.473 | 99.99th=[19006] 00:10:11.473 write: IOPS=5465, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1003msec); 0 zone resets 00:10:11.473 slat (usec): min=10, max=5727, avg=90.63, stdev=516.23 00:10:11.473 clat (usec): min=176, max=18855, avg=11975.14, stdev=1440.85 00:10:11.473 lat (usec): min=4920, max=18885, avg=12065.77, stdev=1517.61 00:10:11.473 clat percentiles (usec): 00:10:11.473 | 1.00th=[ 6259], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11338], 00:10:11.473 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:10:11.473 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13304], 95.00th=[13829], 00:10:11.473 | 99.00th=[16581], 99.50th=[17433], 99.90th=[18744], 99.95th=[18744], 00:10:11.473 | 99.99th=[18744] 00:10:11.473 bw ( KiB/s): min=20680, max=22152, per=26.88%, avg=21416.00, stdev=1040.86, samples=2 00:10:11.473 iops : min= 5170, max= 5538, avg=5354.00, stdev=260.22, samples=2 00:10:11.473 lat (usec) : 250=0.01% 00:10:11.473 lat (msec) : 10=4.85%, 20=95.14% 00:10:11.473 cpu : usr=3.69%, sys=14.37%, ctx=328, majf=0, minf=13 00:10:11.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.473 issued rwts: total=5120,5482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.473 job2: (groupid=0, jobs=1): err= 0: pid=63724: Sat Dec 7 04:25:14 2024 00:10:11.473 read: IOPS=4563, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec) 00:10:11.473 slat (usec): min=8, max=3236, avg=102.56, stdev=482.34 00:10:11.473 clat (usec): min=280, max=15765, avg=13564.01, stdev=1261.31 00:10:11.473 lat (usec): min=3298, max=15787, avg=13666.58, stdev=1164.38 00:10:11.473 clat percentiles (usec): 00:10:11.473 | 1.00th=[ 7308], 5.00th=[11994], 10.00th=[12911], 20.00th=[13304], 00:10:11.473 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:10:11.473 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14484], 95.00th=[14746], 00:10:11.473 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15795], 99.95th=[15795], 00:10:11.473 | 99.99th=[15795] 00:10:11.473 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:11.473 slat (usec): min=9, max=3387, avg=106.91, stdev=452.49 00:10:11.473 clat (usec): min=10245, max=15595, avg=13974.41, stdev=738.06 00:10:11.473 lat (usec): 
min=11958, max=15630, avg=14081.31, stdev=588.98 00:10:11.473 clat percentiles (usec): 00:10:11.473 | 1.00th=[11207], 5.00th=[12911], 10.00th=[13173], 20.00th=[13566], 00:10:11.473 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:10:11.473 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[15139], 00:10:11.473 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:10:11.473 | 99.99th=[15533] 00:10:11.473 bw ( KiB/s): min=17568, max=19296, per=23.13%, avg=18432.00, stdev=1221.88, samples=2 00:10:11.473 iops : min= 4392, max= 4824, avg=4608.00, stdev=305.47, samples=2 00:10:11.473 lat (usec) : 500=0.01% 00:10:11.473 lat (msec) : 4=0.35%, 10=0.35%, 20=99.29% 00:10:11.473 cpu : usr=4.59%, sys=13.77%, ctx=289, majf=0, minf=13 00:10:11.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.473 issued rwts: total=4577,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.473 job3: (groupid=0, jobs=1): err= 0: pid=63725: Sat Dec 7 04:25:14 2024 00:10:11.473 read: IOPS=4435, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1003msec) 00:10:11.473 slat (usec): min=5, max=3393, avg=104.67, stdev=493.89 00:10:11.473 clat (usec): min=173, max=17060, avg=13877.99, stdev=1475.72 00:10:11.473 lat (usec): min=2683, max=17072, avg=13982.66, stdev=1391.11 00:10:11.473 clat percentiles (usec): 00:10:11.473 | 1.00th=[ 6259], 5.00th=[12256], 10.00th=[13304], 20.00th=[13566], 00:10:11.473 | 30.00th=[13698], 40.00th=[13960], 50.00th=[13960], 60.00th=[14091], 00:10:11.473 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15533], 00:10:11.473 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:10:11.473 | 99.99th=[17171] 00:10:11.473 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:11.473 slat (usec): min=9, max=4418, avg=108.01, stdev=467.83 00:10:11.473 clat (usec): min=10073, max=15660, avg=14066.45, stdev=785.81 00:10:11.473 lat (usec): min=11973, max=16644, avg=14174.47, stdev=640.65 00:10:11.473 clat percentiles (usec): 00:10:11.473 | 1.00th=[11207], 5.00th=[12911], 10.00th=[13304], 20.00th=[13566], 00:10:11.473 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:10:11.473 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15270], 00:10:11.473 | 99.00th=[15533], 99.50th=[15533], 99.90th=[15533], 99.95th=[15664], 00:10:11.473 | 99.99th=[15664] 00:10:11.473 bw ( KiB/s): min=18168, max=18696, per=23.13%, avg=18432.00, stdev=373.35, samples=2 00:10:11.473 iops : min= 4542, max= 4674, avg=4608.00, stdev=93.34, samples=2 00:10:11.473 lat (usec) : 250=0.01% 00:10:11.473 lat (msec) : 4=0.35%, 10=0.70%, 20=98.94% 00:10:11.473 cpu : usr=5.69%, sys=12.08%, ctx=284, majf=0, minf=11 00:10:11.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.473 issued rwts: total=4449,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.473 00:10:11.473 Run status group 0 (all jobs): 00:10:11.473 READ: bw=75.0MiB/s (78.7MB/s), 17.3MiB/s-20.0MiB/s (18.2MB/s-20.9MB/s), 
io=75.3MiB (78.9MB), run=1002-1003msec 00:10:11.473 WRITE: bw=77.8MiB/s (81.6MB/s), 17.9MiB/s-21.3MiB/s (18.8MB/s-22.4MB/s), io=78.0MiB (81.8MB), run=1002-1003msec 00:10:11.473 00:10:11.473 Disk stats (read/write): 00:10:11.473 nvme0n1: ios=4338/4608, merge=0/0, ticks=11023/12208, in_queue=23231, util=87.16% 00:10:11.473 nvme0n2: ios=4423/4608, merge=0/0, ticks=24738/23972, in_queue=48710, util=87.77% 00:10:11.473 nvme0n3: ios=3712/4096, merge=0/0, ticks=11250/12274, in_queue=23524, util=89.06% 00:10:11.473 nvme0n4: ios=3584/4096, merge=0/0, ticks=11046/12506, in_queue=23552, util=89.62% 00:10:11.473 04:25:14 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:11.473 [global] 00:10:11.473 thread=1 00:10:11.473 invalidate=1 00:10:11.473 rw=randwrite 00:10:11.473 time_based=1 00:10:11.473 runtime=1 00:10:11.473 ioengine=libaio 00:10:11.473 direct=1 00:10:11.473 bs=4096 00:10:11.473 iodepth=128 00:10:11.473 norandommap=0 00:10:11.473 numjobs=1 00:10:11.473 00:10:11.473 verify_dump=1 00:10:11.473 verify_backlog=512 00:10:11.473 verify_state_save=0 00:10:11.473 do_verify=1 00:10:11.473 verify=crc32c-intel 00:10:11.473 [job0] 00:10:11.473 filename=/dev/nvme0n1 00:10:11.473 [job1] 00:10:11.473 filename=/dev/nvme0n2 00:10:11.473 [job2] 00:10:11.473 filename=/dev/nvme0n3 00:10:11.473 [job3] 00:10:11.473 filename=/dev/nvme0n4 00:10:11.732 Could not set queue depth (nvme0n1) 00:10:11.732 Could not set queue depth (nvme0n2) 00:10:11.732 Could not set queue depth (nvme0n3) 00:10:11.732 Could not set queue depth (nvme0n4) 00:10:11.732 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.732 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.732 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.732 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.732 fio-3.35 00:10:11.732 Starting 4 threads 00:10:13.117 00:10:13.117 job0: (groupid=0, jobs=1): err= 0: pid=63778: Sat Dec 7 04:25:16 2024 00:10:13.117 read: IOPS=2065, BW=8262KiB/s (8460kB/s)(8320KiB/1007msec) 00:10:13.117 slat (usec): min=3, max=12795, avg=202.43, stdev=1092.85 00:10:13.117 clat (usec): min=841, max=65149, avg=25174.89, stdev=8089.51 00:10:13.117 lat (usec): min=6602, max=66568, avg=25377.32, stdev=8159.10 00:10:13.117 clat percentiles (usec): 00:10:13.117 | 1.00th=[ 6849], 5.00th=[17171], 10.00th=[19530], 20.00th=[22152], 00:10:13.117 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:10:13.117 | 70.00th=[25035], 80.00th=[25560], 90.00th=[32637], 95.00th=[45351], 00:10:13.117 | 99.00th=[59507], 99.50th=[62129], 99.90th=[65274], 99.95th=[65274], 00:10:13.117 | 99.99th=[65274] 00:10:13.117 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:10:13.117 slat (usec): min=10, max=16932, avg=220.05, stdev=1215.28 00:10:13.117 clat (usec): min=6959, max=89397, avg=29283.47, stdev=18358.23 00:10:13.117 lat (usec): min=6975, max=89434, avg=29503.52, stdev=18485.94 00:10:13.117 clat percentiles (usec): 00:10:13.117 | 1.00th=[ 7504], 5.00th=[11600], 10.00th=[13173], 20.00th=[19006], 00:10:13.117 | 30.00th=[21365], 40.00th=[22414], 50.00th=[22938], 60.00th=[23462], 00:10:13.117 | 70.00th=[23987], 80.00th=[37487], 90.00th=[61604], 95.00th=[73925], 00:10:13.117 | 
99.00th=[87557], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:10:13.117 | 99.99th=[89654] 00:10:13.117 bw ( KiB/s): min= 7424, max=12288, per=14.71%, avg=9856.00, stdev=3439.37, samples=2 00:10:13.117 iops : min= 1856, max= 3072, avg=2464.00, stdev=859.84, samples=2 00:10:13.117 lat (usec) : 1000=0.02% 00:10:13.117 lat (msec) : 10=1.36%, 20=17.37%, 50=72.50%, 100=8.75% 00:10:13.117 cpu : usr=2.88%, sys=6.26%, ctx=185, majf=0, minf=9 00:10:13.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:10:13.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.117 issued rwts: total=2080,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.117 job1: (groupid=0, jobs=1): err= 0: pid=63779: Sat Dec 7 04:25:16 2024 00:10:13.117 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:13.117 slat (usec): min=6, max=8525, avg=83.78, stdev=409.04 00:10:13.117 clat (usec): min=6976, max=19846, avg=11001.52, stdev=1203.72 00:10:13.117 lat (usec): min=6990, max=19879, avg=11085.30, stdev=1236.36 00:10:13.117 clat percentiles (usec): 00:10:13.117 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:10:13.117 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:13.117 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[13304], 00:10:13.117 | 99.00th=[15926], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:10:13.118 | 99.99th=[19792] 00:10:13.118 write: IOPS=6046, BW=23.6MiB/s (24.8MB/s)(23.7MiB/1002msec); 0 zone resets 00:10:13.118 slat (usec): min=9, max=4434, avg=79.58, stdev=426.58 00:10:13.118 clat (usec): min=1320, max=16226, avg=10682.14, stdev=1258.24 00:10:13.118 lat (usec): min=1368, max=16250, avg=10761.72, stdev=1319.61 00:10:13.118 clat percentiles (usec): 00:10:13.118 | 1.00th=[ 6783], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10028], 00:10:13.118 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:10:13.118 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[12256], 00:10:13.118 | 99.00th=[14353], 99.50th=[15008], 99.90th=[16188], 99.95th=[16188], 00:10:13.118 | 99.99th=[16188] 00:10:13.118 bw ( KiB/s): min=22880, max=24625, per=35.45%, avg=23752.50, stdev=1233.90, samples=2 00:10:13.118 iops : min= 5720, max= 6156, avg=5938.00, stdev=308.30, samples=2 00:10:13.118 lat (msec) : 2=0.14%, 4=0.10%, 10=14.09%, 20=85.67% 00:10:13.118 cpu : usr=4.70%, sys=16.58%, ctx=388, majf=0, minf=10 00:10:13.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:13.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.118 issued rwts: total=5632,6059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.118 job2: (groupid=0, jobs=1): err= 0: pid=63780: Sat Dec 7 04:25:16 2024 00:10:13.118 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1003msec) 00:10:13.118 slat (usec): min=4, max=7754, avg=98.17, stdev=500.28 00:10:13.118 clat (usec): min=2684, max=19761, avg=12637.01, stdev=1609.45 00:10:13.118 lat (usec): min=2726, max=23422, avg=12735.18, stdev=1648.01 00:10:13.118 clat percentiles (usec): 00:10:13.118 | 1.00th=[ 5932], 5.00th=[10421], 10.00th=[11207], 20.00th=[11994], 00:10:13.118 | 
30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:10:13.118 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14091], 95.00th=[15008], 00:10:13.118 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:10:13.118 | 99.99th=[19792] 00:10:13.118 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:13.118 slat (usec): min=10, max=5690, avg=91.89, stdev=512.30 00:10:13.118 clat (usec): min=7029, max=19101, avg=12465.61, stdev=1255.27 00:10:13.118 lat (usec): min=7059, max=19155, avg=12557.50, stdev=1347.36 00:10:13.118 clat percentiles (usec): 00:10:13.118 | 1.00th=[ 8979], 5.00th=[10683], 10.00th=[11469], 20.00th=[11731], 00:10:13.118 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:10:13.118 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13829], 95.00th=[14353], 00:10:13.118 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:10:13.118 | 99.99th=[19006] 00:10:13.118 bw ( KiB/s): min=20480, max=20521, per=30.60%, avg=20500.50, stdev=28.99, samples=2 00:10:13.118 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:13.118 lat (msec) : 4=0.36%, 10=3.33%, 20=96.32% 00:10:13.118 cpu : usr=4.09%, sys=14.97%, ctx=328, majf=0, minf=10 00:10:13.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:13.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.118 issued rwts: total=4982,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.118 job3: (groupid=0, jobs=1): err= 0: pid=63781: Sat Dec 7 04:25:16 2024 00:10:13.118 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:10:13.118 slat (usec): min=7, max=17061, avg=162.55, stdev=1012.15 00:10:13.118 clat (usec): min=13458, max=45388, avg=22831.84, stdev=5714.41 00:10:13.118 lat (usec): min=13471, max=45413, avg=22994.39, stdev=5762.45 00:10:13.118 clat percentiles (usec): 00:10:13.118 | 1.00th=[14615], 5.00th=[16909], 10.00th=[17171], 20.00th=[17695], 00:10:13.118 | 30.00th=[18482], 40.00th=[21890], 50.00th=[22938], 60.00th=[23462], 00:10:13.118 | 70.00th=[23987], 80.00th=[25297], 90.00th=[28967], 95.00th=[32637], 00:10:13.118 | 99.00th=[44303], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:13.118 | 99.99th=[45351] 00:10:13.118 write: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1006msec); 0 zone resets 00:10:13.118 slat (usec): min=6, max=19611, avg=153.37, stdev=1042.56 00:10:13.118 clat (usec): min=1063, max=33652, avg=18307.88, stdev=4826.31 00:10:13.118 lat (usec): min=6859, max=33682, avg=18461.25, stdev=4773.40 00:10:13.118 clat percentiles (usec): 00:10:13.118 | 1.00th=[ 7767], 5.00th=[10814], 10.00th=[12911], 20.00th=[13960], 00:10:13.118 | 30.00th=[15139], 40.00th=[15795], 50.00th=[17957], 60.00th=[21365], 00:10:13.118 | 70.00th=[22152], 80.00th=[22938], 90.00th=[23200], 95.00th=[23462], 00:10:13.118 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:10:13.118 | 99.99th=[33817] 00:10:13.118 bw ( KiB/s): min=12288, max=12312, per=18.36%, avg=12300.00, stdev=16.97, samples=2 00:10:13.118 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:13.118 lat (msec) : 2=0.02%, 10=1.18%, 20=45.03%, 50=53.77% 00:10:13.118 cpu : usr=3.18%, sys=8.06%, ctx=153, majf=0, minf=15 00:10:13.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 
00:10:13.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.118 issued rwts: total=3072,3128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.118 00:10:13.118 Run status group 0 (all jobs): 00:10:13.118 READ: bw=61.2MiB/s (64.1MB/s), 8262KiB/s-22.0MiB/s (8460kB/s-23.0MB/s), io=61.6MiB (64.6MB), run=1002-1007msec 00:10:13.118 WRITE: bw=65.4MiB/s (68.6MB/s), 9.93MiB/s-23.6MiB/s (10.4MB/s-24.8MB/s), io=65.9MiB (69.1MB), run=1002-1007msec 00:10:13.118 00:10:13.118 Disk stats (read/write): 00:10:13.118 nvme0n1: ios=1649/2048, merge=0/0, ticks=19660/31206, in_queue=50866, util=86.97% 00:10:13.118 nvme0n2: ios=4834/5120, merge=0/0, ticks=24616/22753, in_queue=47369, util=87.91% 00:10:13.118 nvme0n3: ios=4096/4438, merge=0/0, ticks=24680/23558, in_queue=48238, util=89.09% 00:10:13.118 nvme0n4: ios=2560/2741, merge=0/0, ticks=53598/48723, in_queue=102321, util=89.55% 00:10:13.118 04:25:16 -- target/fio.sh@55 -- # sync 00:10:13.118 04:25:16 -- target/fio.sh@59 -- # fio_pid=63794 00:10:13.118 04:25:16 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:13.118 04:25:16 -- target/fio.sh@61 -- # sleep 3 00:10:13.118 [global] 00:10:13.118 thread=1 00:10:13.118 invalidate=1 00:10:13.118 rw=read 00:10:13.118 time_based=1 00:10:13.118 runtime=10 00:10:13.118 ioengine=libaio 00:10:13.118 direct=1 00:10:13.118 bs=4096 00:10:13.118 iodepth=1 00:10:13.118 norandommap=1 00:10:13.118 numjobs=1 00:10:13.118 00:10:13.118 [job0] 00:10:13.118 filename=/dev/nvme0n1 00:10:13.118 [job1] 00:10:13.118 filename=/dev/nvme0n2 00:10:13.118 [job2] 00:10:13.118 filename=/dev/nvme0n3 00:10:13.118 [job3] 00:10:13.118 filename=/dev/nvme0n4 00:10:13.118 Could not set queue depth (nvme0n1) 00:10:13.118 Could not set queue depth (nvme0n2) 00:10:13.118 Could not set queue depth (nvme0n3) 00:10:13.118 Could not set queue depth (nvme0n4) 00:10:13.118 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.118 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.118 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.118 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.118 fio-3.35 00:10:13.118 Starting 4 threads 00:10:16.402 04:25:19 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:16.402 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40525824, buflen=4096 00:10:16.402 fio: pid=63843, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.402 04:25:19 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:16.402 fio: pid=63842, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.402 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44879872, buflen=4096 00:10:16.402 04:25:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.402 04:25:19 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:16.660 fio: pid=63840, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:16.660 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12386304, buflen=4096 00:10:16.660 04:25:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.660 04:25:19 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:16.918 fio: pid=63841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.918 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=17797120, buflen=4096 00:10:16.918 04:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.918 04:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:16.918 00:10:16.918 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63840: Sat Dec 7 04:25:20 2024 00:10:16.918 read: IOPS=5725, BW=22.4MiB/s (23.4MB/s)(75.8MiB/3390msec) 00:10:16.918 slat (usec): min=10, max=14874, avg=14.55, stdev=161.98 00:10:16.918 clat (usec): min=3, max=4832, avg=158.94, stdev=42.69 00:10:16.918 lat (usec): min=134, max=15031, avg=173.48, stdev=167.77 00:10:16.918 clat percentiles (usec): 00:10:16.918 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:16.918 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:10:16.918 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:10:16.918 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 343], 99.95th=[ 429], 00:10:16.918 | 99.99th=[ 2114] 00:10:16.918 bw ( KiB/s): min=22320, max=23520, per=34.49%, avg=23094.67, stdev=589.37, samples=6 00:10:16.918 iops : min= 5580, max= 5880, avg=5773.67, stdev=147.34, samples=6 00:10:16.918 lat (usec) : 4=0.01%, 50=0.01%, 250=99.67%, 500=0.27%, 750=0.02% 00:10:16.918 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:10:16.918 cpu : usr=1.50%, sys=6.40%, ctx=19419, majf=0, minf=1 00:10:16.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 issued rwts: total=19409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.918 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63841: Sat Dec 7 04:25:20 2024 00:10:16.918 read: IOPS=5690, BW=22.2MiB/s (23.3MB/s)(81.0MiB/3643msec) 00:10:16.918 slat (usec): min=10, max=16704, avg=16.31, stdev=208.49 00:10:16.918 clat (usec): min=117, max=7283, avg=158.38, stdev=62.63 00:10:16.918 lat (usec): min=127, max=16951, avg=174.68, stdev=218.48 00:10:16.918 clat percentiles (usec): 00:10:16.918 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:10:16.918 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:10:16.918 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:10:16.918 | 99.00th=[ 198], 99.50th=[ 208], 99.90th=[ 441], 99.95th=[ 816], 00:10:16.918 | 99.99th=[ 1926] 00:10:16.918 bw ( KiB/s): min=21228, max=23664, per=34.09%, avg=22826.86, stdev=1073.60, samples=7 00:10:16.918 iops : min= 5307, max= 5916, avg=5706.71, stdev=268.40, samples=7 00:10:16.918 lat (usec) : 250=99.82%, 500=0.09%, 750=0.03%, 1000=0.02% 00:10:16.918 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 
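[editor's note] The hotplug pass above follows a simple pattern: start the 10-second, queue-depth-1 read job in the background, then delete the backing bdevs out from under the target one at a time and let fio surface the resulting "Operation not supported" errors. A rough sketch of that flow under the same bdev names seen in the trace (concat0, raid0, Malloc0, Malloc1, with the remaining malloc bdevs deleted later); the job-file path is hypothetical and the real test uses fio-wrapper plus a helper that records fio's exit status:

    # launch the long read job and remember its pid
    fio /tmp/nvmf-read.fio &        # hypothetical equivalent of fio-wrapper -t read -r 10 -d 1
    fio_pid=$!
    sleep 3                         # let I/O get going before pulling bdevs

    # remove backing bdevs while fio is still reading; each delete is expected to
    # produce io_u errors on the corresponding /dev/nvme0nX namespace
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_malloc_delete Malloc1
    # ... Malloc2 through Malloc6 follow in the actual run

    # fio exiting non-zero here is the pass condition for the hotplug test
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'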
00:10:16.918 cpu : usr=1.48%, sys=6.45%, ctx=20744, majf=0, minf=2 00:10:16.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 issued rwts: total=20730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.918 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63842: Sat Dec 7 04:25:20 2024 00:10:16.918 read: IOPS=3468, BW=13.5MiB/s (14.2MB/s)(42.8MiB/3159msec) 00:10:16.918 slat (usec): min=10, max=18722, avg=18.71, stdev=234.10 00:10:16.918 clat (usec): min=140, max=2639, avg=268.05, stdev=50.59 00:10:16.918 lat (usec): min=152, max=18900, avg=286.75, stdev=238.43 00:10:16.918 clat percentiles (usec): 00:10:16.918 | 1.00th=[ 153], 5.00th=[ 172], 10.00th=[ 217], 20.00th=[ 255], 00:10:16.918 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:16.918 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:10:16.918 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 506], 99.95th=[ 1037], 00:10:16.918 | 99.99th=[ 1729] 00:10:16.918 bw ( KiB/s): min=13488, max=14096, per=20.39%, avg=13653.33, stdev=244.11, samples=6 00:10:16.918 iops : min= 3372, max= 3524, avg=3413.33, stdev=61.03, samples=6 00:10:16.918 lat (usec) : 250=15.93%, 500=83.96%, 750=0.05% 00:10:16.918 lat (msec) : 2=0.05%, 4=0.01% 00:10:16.918 cpu : usr=1.14%, sys=4.59%, ctx=10961, majf=0, minf=2 00:10:16.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 issued rwts: total=10958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:16.918 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63843: Sat Dec 7 04:25:20 2024 00:10:16.918 read: IOPS=3385, BW=13.2MiB/s (13.9MB/s)(38.6MiB/2923msec) 00:10:16.918 slat (usec): min=11, max=251, avg=14.75, stdev= 4.22 00:10:16.918 clat (usec): min=143, max=2913, avg=279.20, stdev=43.19 00:10:16.918 lat (usec): min=156, max=2938, avg=293.94, stdev=43.41 00:10:16.918 clat percentiles (usec): 00:10:16.918 | 1.00th=[ 237], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:10:16.918 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:16.918 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 310], 00:10:16.918 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 594], 99.95th=[ 783], 00:10:16.918 | 99.99th=[ 2900] 00:10:16.918 bw ( KiB/s): min=13424, max=13744, per=20.23%, avg=13550.40, stdev=127.80, samples=5 00:10:16.918 iops : min= 3356, max= 3436, avg=3387.60, stdev=31.95, samples=5 00:10:16.918 lat (usec) : 250=4.60%, 500=95.26%, 750=0.08%, 1000=0.02% 00:10:16.918 lat (msec) : 2=0.01%, 4=0.02% 00:10:16.918 cpu : usr=0.75%, sys=4.48%, ctx=9900, majf=0, minf=2 00:10:16.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:16.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.918 issued rwts: total=9895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.918 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:16.918 00:10:16.918 Run status group 0 (all jobs): 00:10:16.918 READ: bw=65.4MiB/s (68.6MB/s), 13.2MiB/s-22.4MiB/s (13.9MB/s-23.4MB/s), io=238MiB (250MB), run=2923-3643msec 00:10:16.918 00:10:16.918 Disk stats (read/write): 00:10:16.918 nvme0n1: ios=19281/0, merge=0/0, ticks=3055/0, in_queue=3055, util=95.08% 00:10:16.918 nvme0n2: ios=20568/0, merge=0/0, ticks=3301/0, in_queue=3301, util=94.86% 00:10:16.918 nvme0n3: ios=10749/0, merge=0/0, ticks=2920/0, in_queue=2920, util=95.78% 00:10:16.919 nvme0n4: ios=9712/0, merge=0/0, ticks=2743/0, in_queue=2743, util=96.73% 00:10:17.177 04:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.177 04:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:17.435 04:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.435 04:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:17.694 04:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.694 04:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:17.953 04:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.953 04:25:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:18.211 04:25:21 -- target/fio.sh@69 -- # fio_status=0 00:10:18.211 04:25:21 -- target/fio.sh@70 -- # wait 63794 00:10:18.211 04:25:21 -- target/fio.sh@70 -- # fio_status=4 00:10:18.211 04:25:21 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.211 04:25:21 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.211 04:25:21 -- common/autotest_common.sh@1208 -- # local i=0 00:10:18.211 04:25:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:18.211 04:25:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.211 04:25:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:18.211 04:25:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.211 04:25:21 -- common/autotest_common.sh@1220 -- # return 0 00:10:18.211 04:25:21 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:18.211 nvmf hotplug test: fio failed as expected 00:10:18.211 04:25:21 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:18.211 04:25:21 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.469 04:25:21 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:18.469 04:25:21 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:18.469 04:25:21 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:18.469 04:25:21 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:18.469 04:25:21 -- target/fio.sh@91 -- # nvmftestfini 00:10:18.469 04:25:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:18.469 04:25:21 -- nvmf/common.sh@116 -- # sync 00:10:18.469 04:25:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:18.469 04:25:21 -- nvmf/common.sh@119 -- # set +e 00:10:18.469 04:25:21 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:10:18.469 04:25:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:18.469 rmmod nvme_tcp 00:10:18.469 rmmod nvme_fabrics 00:10:18.469 rmmod nvme_keyring 00:10:18.469 04:25:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:18.469 04:25:21 -- nvmf/common.sh@123 -- # set -e 00:10:18.469 04:25:21 -- nvmf/common.sh@124 -- # return 0 00:10:18.469 04:25:21 -- nvmf/common.sh@477 -- # '[' -n 63411 ']' 00:10:18.469 04:25:21 -- nvmf/common.sh@478 -- # killprocess 63411 00:10:18.469 04:25:21 -- common/autotest_common.sh@936 -- # '[' -z 63411 ']' 00:10:18.469 04:25:21 -- common/autotest_common.sh@940 -- # kill -0 63411 00:10:18.469 04:25:21 -- common/autotest_common.sh@941 -- # uname 00:10:18.469 04:25:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:18.469 04:25:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63411 00:10:18.469 04:25:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:18.469 04:25:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:18.469 killing process with pid 63411 00:10:18.469 04:25:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63411' 00:10:18.469 04:25:21 -- common/autotest_common.sh@955 -- # kill 63411 00:10:18.469 04:25:21 -- common/autotest_common.sh@960 -- # wait 63411 00:10:18.728 04:25:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:18.728 04:25:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:18.728 04:25:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:18.728 04:25:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.728 04:25:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:18.728 04:25:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.728 04:25:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.728 04:25:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.728 04:25:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:18.728 ************************************ 00:10:18.728 END TEST nvmf_fio_target 00:10:18.728 ************************************ 00:10:18.728 00:10:18.728 real 0m19.216s 00:10:18.728 user 1m12.230s 00:10:18.728 sys 0m10.312s 00:10:18.728 04:25:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.728 04:25:21 -- common/autotest_common.sh@10 -- # set +x 00:10:18.728 04:25:21 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:18.728 04:25:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:18.728 04:25:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.728 04:25:21 -- common/autotest_common.sh@10 -- # set +x 00:10:18.728 ************************************ 00:10:18.728 START TEST nvmf_bdevio 00:10:18.728 ************************************ 00:10:18.728 04:25:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:18.987 * Looking for test storage... 
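[editor's note] The nvmf_fio_target teardown traced above reduces to a short sequence: disconnect the kernel initiator, remove the subsystem from the target, unload the NVMe/TCP modules, and stop nvmf_tgt. A condensed sketch using the same NQN and target pid shown in the trace (the real run wraps the kill/wait in the killprocess helper):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                    # detach the kernel initiator
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the target-side subsystem
    rm -f ./local-job*-verify.state                                  # fio verify state from the runs above
    sync
    modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring as dependencies
    modprobe -v -r nvme-fabrics
    kill 63411 && wait 63411       # nvmf_tgt pid from this run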
00:10:18.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.987 04:25:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.987 04:25:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.987 04:25:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.987 04:25:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.987 04:25:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.987 04:25:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.987 04:25:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.987 04:25:22 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.987 04:25:22 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.987 04:25:22 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.987 04:25:22 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.987 04:25:22 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.987 04:25:22 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.987 04:25:22 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.987 04:25:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.987 04:25:22 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.987 04:25:22 -- scripts/common.sh@344 -- # : 1 00:10:18.987 04:25:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.987 04:25:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.987 04:25:22 -- scripts/common.sh@364 -- # decimal 1 00:10:18.987 04:25:22 -- scripts/common.sh@352 -- # local d=1 00:10:18.987 04:25:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.987 04:25:22 -- scripts/common.sh@354 -- # echo 1 00:10:18.987 04:25:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.987 04:25:22 -- scripts/common.sh@365 -- # decimal 2 00:10:18.987 04:25:22 -- scripts/common.sh@352 -- # local d=2 00:10:18.987 04:25:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.987 04:25:22 -- scripts/common.sh@354 -- # echo 2 00:10:18.987 04:25:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.987 04:25:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.987 04:25:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.987 04:25:22 -- scripts/common.sh@367 -- # return 0 00:10:18.987 04:25:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.987 04:25:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 04:25:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 04:25:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 
04:25:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.987 --rc genhtml_branch_coverage=1 00:10:18.987 --rc genhtml_function_coverage=1 00:10:18.987 --rc genhtml_legend=1 00:10:18.987 --rc geninfo_all_blocks=1 00:10:18.987 --rc geninfo_unexecuted_blocks=1 00:10:18.987 00:10:18.987 ' 00:10:18.987 04:25:22 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.987 04:25:22 -- nvmf/common.sh@7 -- # uname -s 00:10:18.987 04:25:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.987 04:25:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.987 04:25:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.987 04:25:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.987 04:25:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.987 04:25:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.987 04:25:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.987 04:25:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.987 04:25:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.987 04:25:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.987 04:25:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:10:18.987 04:25:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:10:18.987 04:25:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.987 04:25:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.987 04:25:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.987 04:25:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.987 04:25:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.987 04:25:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.987 04:25:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.987 04:25:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 04:25:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 04:25:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 04:25:22 -- paths/export.sh@5 -- # export PATH 00:10:18.987 04:25:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.987 04:25:22 -- nvmf/common.sh@46 -- # : 0 00:10:18.987 04:25:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.987 04:25:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.987 04:25:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.987 04:25:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.987 04:25:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.987 04:25:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:18.987 04:25:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.987 04:25:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.987 04:25:22 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.987 04:25:22 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.987 04:25:22 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:18.987 04:25:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:18.987 04:25:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.987 04:25:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:18.987 04:25:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:18.987 04:25:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:18.987 04:25:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.987 04:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.987 04:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.987 04:25:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:18.987 04:25:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:18.987 04:25:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:18.987 04:25:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:18.987 04:25:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:18.987 04:25:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:18.987 04:25:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.987 04:25:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.987 04:25:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.987 04:25:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:18.987 04:25:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.987 04:25:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.987 04:25:22 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.987 04:25:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.987 04:25:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.987 04:25:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.987 04:25:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.987 04:25:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.987 04:25:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:18.987 04:25:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:18.987 Cannot find device "nvmf_tgt_br" 00:10:18.987 04:25:22 -- nvmf/common.sh@154 -- # true 00:10:18.987 04:25:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.987 Cannot find device "nvmf_tgt_br2" 00:10:18.987 04:25:22 -- nvmf/common.sh@155 -- # true 00:10:18.987 04:25:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:18.987 04:25:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:18.987 Cannot find device "nvmf_tgt_br" 00:10:18.987 04:25:22 -- nvmf/common.sh@157 -- # true 00:10:18.987 04:25:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:18.987 Cannot find device "nvmf_tgt_br2" 00:10:18.987 04:25:22 -- nvmf/common.sh@158 -- # true 00:10:18.987 04:25:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:18.987 04:25:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:19.246 04:25:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.246 04:25:22 -- nvmf/common.sh@161 -- # true 00:10:19.246 04:25:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.246 04:25:22 -- nvmf/common.sh@162 -- # true 00:10:19.246 04:25:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.246 04:25:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.246 04:25:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.246 04:25:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.246 04:25:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.246 04:25:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.246 04:25:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.246 04:25:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:19.246 04:25:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:19.246 04:25:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:19.246 04:25:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:19.246 04:25:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:19.246 04:25:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:19.246 04:25:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.246 04:25:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.246 04:25:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:19.246 04:25:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:19.246 04:25:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:19.246 04:25:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.246 04:25:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.246 04:25:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.246 04:25:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.246 04:25:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.246 04:25:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:19.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:19.246 00:10:19.246 --- 10.0.0.2 ping statistics --- 00:10:19.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.246 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:19.246 04:25:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:19.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:19.246 00:10:19.246 --- 10.0.0.3 ping statistics --- 00:10:19.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.246 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:19.246 04:25:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:19.246 00:10:19.246 --- 10.0.0.1 ping statistics --- 00:10:19.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.246 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:19.246 04:25:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.246 04:25:22 -- nvmf/common.sh@421 -- # return 0 00:10:19.246 04:25:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:19.246 04:25:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.246 04:25:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:19.246 04:25:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:19.246 04:25:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.246 04:25:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:19.246 04:25:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:19.246 04:25:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:19.246 04:25:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:19.246 04:25:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.246 04:25:22 -- common/autotest_common.sh@10 -- # set +x 00:10:19.246 04:25:22 -- nvmf/common.sh@469 -- # nvmfpid=64114 00:10:19.246 04:25:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:19.246 04:25:22 -- nvmf/common.sh@470 -- # waitforlisten 64114 00:10:19.246 04:25:22 -- common/autotest_common.sh@829 -- # '[' -z 64114 ']' 00:10:19.246 04:25:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.246 04:25:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.246 04:25:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
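[editor's note] The reachability checks above run on top of the small veth/netns topology that the preceding nvmf_veth_init trace builds: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 (and 10.0.0.3 on a second interface), the initiator side keeps 10.0.0.1, and all legs hang off the nvmf_br bridge. A trimmed sketch of that setup, using only commands visible in the trace (the second target interface is omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up   # tie both legs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check before starting nvmf_tgt in the namespace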
00:10:19.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.246 04:25:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.246 04:25:22 -- common/autotest_common.sh@10 -- # set +x 00:10:19.504 [2024-12-07 04:25:22.510304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:19.504 [2024-12-07 04:25:22.510397] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.504 [2024-12-07 04:25:22.643621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.504 [2024-12-07 04:25:22.699674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.504 [2024-12-07 04:25:22.699863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.504 [2024-12-07 04:25:22.699877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.504 [2024-12-07 04:25:22.699885] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.504 [2024-12-07 04:25:22.700150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:19.504 [2024-12-07 04:25:22.700274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:19.504 [2024-12-07 04:25:22.700276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.504 [2024-12-07 04:25:22.699993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:20.435 04:25:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.435 04:25:23 -- common/autotest_common.sh@862 -- # return 0 00:10:20.435 04:25:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:20.435 04:25:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.435 04:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 04:25:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.435 04:25:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.435 04:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.435 04:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 [2024-12-07 04:25:23.499255] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.435 04:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.435 04:25:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.435 04:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.435 04:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 Malloc0 00:10:20.435 04:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.435 04:25:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.435 04:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.435 04:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 04:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.435 04:25:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.435 04:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.435 04:25:23 -- common/autotest_common.sh@10 -- # set +x 
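[editor's note] For the bdevio run the target is populated with a single 64 MiB malloc namespace. The rpc_cmd calls traced above, together with the listener added in the very next step of the trace, boil down to this sequence (the sketch calls scripts/rpc.py directly and assumes the default /var/tmp/spdk.sock RPC socket; the test itself issues these through rpc_cmd against the namespaced target):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420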
00:10:20.435 04:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.435 04:25:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.435 04:25:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.435 04:25:23 -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 [2024-12-07 04:25:23.563046] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.435 04:25:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.435 04:25:23 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:20.435 04:25:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:20.435 04:25:23 -- nvmf/common.sh@520 -- # config=() 00:10:20.435 04:25:23 -- nvmf/common.sh@520 -- # local subsystem config 00:10:20.435 04:25:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:20.435 04:25:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:20.435 { 00:10:20.435 "params": { 00:10:20.435 "name": "Nvme$subsystem", 00:10:20.435 "trtype": "$TEST_TRANSPORT", 00:10:20.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.435 "adrfam": "ipv4", 00:10:20.435 "trsvcid": "$NVMF_PORT", 00:10:20.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.435 "hdgst": ${hdgst:-false}, 00:10:20.435 "ddgst": ${ddgst:-false} 00:10:20.435 }, 00:10:20.435 "method": "bdev_nvme_attach_controller" 00:10:20.435 } 00:10:20.435 EOF 00:10:20.435 )") 00:10:20.435 04:25:23 -- nvmf/common.sh@542 -- # cat 00:10:20.435 04:25:23 -- nvmf/common.sh@544 -- # jq . 00:10:20.435 04:25:23 -- nvmf/common.sh@545 -- # IFS=, 00:10:20.435 04:25:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:20.435 "params": { 00:10:20.435 "name": "Nvme1", 00:10:20.435 "trtype": "tcp", 00:10:20.435 "traddr": "10.0.0.2", 00:10:20.435 "adrfam": "ipv4", 00:10:20.436 "trsvcid": "4420", 00:10:20.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.436 "hdgst": false, 00:10:20.436 "ddgst": false 00:10:20.436 }, 00:10:20.436 "method": "bdev_nvme_attach_controller" 00:10:20.436 }' 00:10:20.436 [2024-12-07 04:25:23.623605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:20.436 [2024-12-07 04:25:23.623722] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64150 ] 00:10:20.693 [2024-12-07 04:25:23.761643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.693 [2024-12-07 04:25:23.819195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.693 [2024-12-07 04:25:23.819340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.693 [2024-12-07 04:25:23.819344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.951 [2024-12-07 04:25:23.949121] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:20.951 [2024-12-07 04:25:23.949159] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:20.951 I/O targets: 00:10:20.951 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:20.951 00:10:20.951 00:10:20.951 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.951 http://cunit.sourceforge.net/ 00:10:20.951 00:10:20.951 00:10:20.951 Suite: bdevio tests on: Nvme1n1 00:10:20.951 Test: blockdev write read block ...passed 00:10:20.952 Test: blockdev write zeroes read block ...passed 00:10:20.952 Test: blockdev write zeroes read no split ...passed 00:10:20.952 Test: blockdev write zeroes read split ...passed 00:10:20.952 Test: blockdev write zeroes read split partial ...passed 00:10:20.952 Test: blockdev reset ...[2024-12-07 04:25:23.982675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:20.952 [2024-12-07 04:25:23.982779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c7c80 (9): Bad file descriptor 00:10:20.952 [2024-12-07 04:25:23.999342] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:20.952 passed 00:10:20.952 Test: blockdev write read 8 blocks ...passed 00:10:20.952 Test: blockdev write read size > 128k ...passed 00:10:20.952 Test: blockdev write read invalid size ...passed 00:10:20.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.952 Test: blockdev write read max offset ...passed 00:10:20.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.952 Test: blockdev writev readv 8 blocks ...passed 00:10:20.952 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.952 Test: blockdev writev readv block ...passed 00:10:20.952 Test: blockdev writev readv size > 128k ...passed 00:10:20.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.952 Test: blockdev comparev and writev ...[2024-12-07 04:25:24.009744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.009938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.010071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.010190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.010671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.010830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.010946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.011058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.011573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.011720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.011840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.011951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.012462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.012601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.012755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.952 [2024-12-07 04:25:24.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:20.952 passed 00:10:20.952 Test: blockdev nvme passthru rw ...passed 00:10:20.952 Test: blockdev nvme passthru vendor specific ...[2024-12-07 04:25:24.014108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.952 [2024-12-07 04:25:24.014262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.014535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.952 [2024-12-07 04:25:24.014655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.014916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.952 [2024-12-07 04:25:24.015043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:20.952 [2024-12-07 04:25:24.015297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.952 [2024-12-07 04:25:24.015431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:20.952 passed 00:10:20.952 Test: blockdev nvme admin passthru ...passed 00:10:20.952 Test: blockdev copy ...passed 00:10:20.952 00:10:20.952 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.952 suites 1 1 n/a 0 0 00:10:20.952 tests 23 23 23 0 0 00:10:20.952 asserts 152 152 152 0 n/a 00:10:20.952 00:10:20.952 Elapsed time = 0.168 seconds 00:10:21.211 04:25:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.211 04:25:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.211 04:25:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 04:25:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.211 04:25:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:21.211 04:25:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:21.211 04:25:24 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:21.211 04:25:24 -- nvmf/common.sh@116 -- # sync 00:10:21.211 04:25:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:21.211 04:25:24 -- nvmf/common.sh@119 -- # set +e 00:10:21.211 04:25:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:21.211 04:25:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:21.211 rmmod nvme_tcp 00:10:21.211 rmmod nvme_fabrics 00:10:21.211 rmmod nvme_keyring 00:10:21.211 04:25:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:21.211 04:25:24 -- nvmf/common.sh@123 -- # set -e 00:10:21.211 04:25:24 -- nvmf/common.sh@124 -- # return 0 00:10:21.211 04:25:24 -- nvmf/common.sh@477 -- # '[' -n 64114 ']' 00:10:21.211 04:25:24 -- nvmf/common.sh@478 -- # killprocess 64114 00:10:21.211 04:25:24 -- common/autotest_common.sh@936 -- # '[' -z 64114 ']' 00:10:21.211 04:25:24 -- common/autotest_common.sh@940 -- # kill -0 64114 00:10:21.211 04:25:24 -- common/autotest_common.sh@941 -- # uname 00:10:21.211 04:25:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:21.211 04:25:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64114 00:10:21.211 04:25:24 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:21.211 04:25:24 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:21.211 04:25:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64114' 00:10:21.211 killing process with pid 64114 00:10:21.211 04:25:24 -- common/autotest_common.sh@955 -- # kill 64114 00:10:21.211 04:25:24 -- common/autotest_common.sh@960 -- # wait 64114 00:10:21.470 04:25:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:21.470 04:25:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:21.470 04:25:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:21.470 04:25:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.470 04:25:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:21.470 04:25:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.470 04:25:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.470 04:25:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.470 04:25:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:21.470 00:10:21.470 real 0m2.609s 00:10:21.470 user 0m8.379s 00:10:21.470 sys 0m0.645s 00:10:21.470 04:25:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:21.470 ************************************ 00:10:21.470 END TEST nvmf_bdevio 00:10:21.470 04:25:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.470 ************************************ 00:10:21.470 04:25:24 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:21.470 04:25:24 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:21.470 04:25:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:21.470 04:25:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.470 04:25:24 -- common/autotest_common.sh@10 -- # set +x 00:10:21.470 ************************************ 00:10:21.470 START TEST nvmf_bdevio_no_huge 00:10:21.470 ************************************ 00:10:21.470 04:25:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:21.470 * Looking for test storage... 
00:10:21.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.470 04:25:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:21.470 04:25:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:21.470 04:25:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:21.730 04:25:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:21.730 04:25:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:21.730 04:25:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:21.730 04:25:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:21.730 04:25:24 -- scripts/common.sh@335 -- # IFS=.-: 00:10:21.730 04:25:24 -- scripts/common.sh@335 -- # read -ra ver1 00:10:21.730 04:25:24 -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.730 04:25:24 -- scripts/common.sh@336 -- # read -ra ver2 00:10:21.730 04:25:24 -- scripts/common.sh@337 -- # local 'op=<' 00:10:21.730 04:25:24 -- scripts/common.sh@339 -- # ver1_l=2 00:10:21.730 04:25:24 -- scripts/common.sh@340 -- # ver2_l=1 00:10:21.730 04:25:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:21.730 04:25:24 -- scripts/common.sh@343 -- # case "$op" in 00:10:21.730 04:25:24 -- scripts/common.sh@344 -- # : 1 00:10:21.730 04:25:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:21.730 04:25:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.730 04:25:24 -- scripts/common.sh@364 -- # decimal 1 00:10:21.730 04:25:24 -- scripts/common.sh@352 -- # local d=1 00:10:21.730 04:25:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.730 04:25:24 -- scripts/common.sh@354 -- # echo 1 00:10:21.730 04:25:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:21.730 04:25:24 -- scripts/common.sh@365 -- # decimal 2 00:10:21.730 04:25:24 -- scripts/common.sh@352 -- # local d=2 00:10:21.730 04:25:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.730 04:25:24 -- scripts/common.sh@354 -- # echo 2 00:10:21.730 04:25:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:21.730 04:25:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:21.730 04:25:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:21.730 04:25:24 -- scripts/common.sh@367 -- # return 0 00:10:21.730 04:25:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.730 04:25:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:21.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.730 --rc genhtml_branch_coverage=1 00:10:21.730 --rc genhtml_function_coverage=1 00:10:21.730 --rc genhtml_legend=1 00:10:21.730 --rc geninfo_all_blocks=1 00:10:21.730 --rc geninfo_unexecuted_blocks=1 00:10:21.730 00:10:21.730 ' 00:10:21.730 04:25:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:21.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.730 --rc genhtml_branch_coverage=1 00:10:21.730 --rc genhtml_function_coverage=1 00:10:21.730 --rc genhtml_legend=1 00:10:21.730 --rc geninfo_all_blocks=1 00:10:21.730 --rc geninfo_unexecuted_blocks=1 00:10:21.730 00:10:21.730 ' 00:10:21.730 04:25:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:21.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.730 --rc genhtml_branch_coverage=1 00:10:21.730 --rc genhtml_function_coverage=1 00:10:21.730 --rc genhtml_legend=1 00:10:21.730 --rc geninfo_all_blocks=1 00:10:21.730 --rc geninfo_unexecuted_blocks=1 00:10:21.730 00:10:21.730 ' 00:10:21.730 
04:25:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:21.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.730 --rc genhtml_branch_coverage=1 00:10:21.730 --rc genhtml_function_coverage=1 00:10:21.730 --rc genhtml_legend=1 00:10:21.730 --rc geninfo_all_blocks=1 00:10:21.730 --rc geninfo_unexecuted_blocks=1 00:10:21.730 00:10:21.730 ' 00:10:21.730 04:25:24 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.730 04:25:24 -- nvmf/common.sh@7 -- # uname -s 00:10:21.730 04:25:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.730 04:25:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.730 04:25:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.730 04:25:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.730 04:25:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.730 04:25:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.730 04:25:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.730 04:25:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.730 04:25:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.730 04:25:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.730 04:25:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:10:21.730 04:25:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:10:21.730 04:25:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.730 04:25:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.730 04:25:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.730 04:25:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.730 04:25:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.730 04:25:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.730 04:25:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.730 04:25:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.730 04:25:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.730 04:25:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.730 04:25:24 -- paths/export.sh@5 -- # export PATH 00:10:21.731 04:25:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.731 04:25:24 -- nvmf/common.sh@46 -- # : 0 00:10:21.731 04:25:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:21.731 04:25:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:21.731 04:25:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:21.731 04:25:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.731 04:25:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.731 04:25:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:21.731 04:25:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:21.731 04:25:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:21.731 04:25:24 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.731 04:25:24 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.731 04:25:24 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:21.731 04:25:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:21.731 04:25:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.731 04:25:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:21.731 04:25:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:21.731 04:25:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:21.731 04:25:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.731 04:25:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.731 04:25:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.731 04:25:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:21.731 04:25:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:21.731 04:25:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:21.731 04:25:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:21.731 04:25:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:21.731 04:25:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:21.731 04:25:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.731 04:25:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.731 04:25:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.731 04:25:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:21.731 04:25:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.731 04:25:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.731 04:25:24 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.731 04:25:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.731 04:25:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.731 04:25:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.731 04:25:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.731 04:25:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.731 04:25:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:21.731 04:25:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:21.731 Cannot find device "nvmf_tgt_br" 00:10:21.731 04:25:24 -- nvmf/common.sh@154 -- # true 00:10:21.731 04:25:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.731 Cannot find device "nvmf_tgt_br2" 00:10:21.731 04:25:24 -- nvmf/common.sh@155 -- # true 00:10:21.731 04:25:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:21.731 04:25:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:21.731 Cannot find device "nvmf_tgt_br" 00:10:21.731 04:25:24 -- nvmf/common.sh@157 -- # true 00:10:21.731 04:25:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:21.731 Cannot find device "nvmf_tgt_br2" 00:10:21.731 04:25:24 -- nvmf/common.sh@158 -- # true 00:10:21.731 04:25:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:21.731 04:25:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:21.731 04:25:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.731 04:25:24 -- nvmf/common.sh@161 -- # true 00:10:21.731 04:25:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.731 04:25:24 -- nvmf/common.sh@162 -- # true 00:10:21.731 04:25:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.731 04:25:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.731 04:25:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.731 04:25:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.731 04:25:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.991 04:25:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.991 04:25:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.991 04:25:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:21.991 04:25:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:21.991 04:25:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:21.991 04:25:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:21.991 04:25:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:21.991 04:25:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:21.991 04:25:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.991 04:25:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:21.991 04:25:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:21.991 04:25:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:21.991 04:25:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:21.991 04:25:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.991 04:25:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:21.991 04:25:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.991 04:25:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.991 04:25:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.991 04:25:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:21.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:10:21.991 00:10:21.991 --- 10.0.0.2 ping statistics --- 00:10:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.991 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:21.991 04:25:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:21.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:21.991 00:10:21.991 --- 10.0.0.3 ping statistics --- 00:10:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.991 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:21.991 04:25:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:21.991 00:10:21.991 --- 10.0.0.1 ping statistics --- 00:10:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.991 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:21.991 04:25:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.991 04:25:25 -- nvmf/common.sh@421 -- # return 0 00:10:21.991 04:25:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:21.991 04:25:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.991 04:25:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:21.991 04:25:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:21.991 04:25:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.991 04:25:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:21.991 04:25:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:21.991 04:25:25 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:21.991 04:25:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:21.991 04:25:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.991 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 04:25:25 -- nvmf/common.sh@469 -- # nvmfpid=64324 00:10:21.991 04:25:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:21.991 04:25:25 -- nvmf/common.sh@470 -- # waitforlisten 64324 00:10:21.991 04:25:25 -- common/autotest_common.sh@829 -- # '[' -z 64324 ']' 00:10:21.991 04:25:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
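The nvmf_veth_init sequence traced here rebuilds the same topology for every test in this log. Condensed, with every name and address taken from the trace (cleanup of stale devices and the repeated link-up steps are omitted), it is roughly:

# One network namespace for the target, three veth pairs, one bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target, first listener address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target, second address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) on the init interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm the bridged path works before nvmf_tgt is started.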
00:10:21.991 04:25:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.991 04:25:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.991 04:25:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.991 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 [2024-12-07 04:25:25.213310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:21.991 [2024-12-07 04:25:25.213415] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:22.249 [2024-12-07 04:25:25.365908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.249 [2024-12-07 04:25:25.464475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.249 [2024-12-07 04:25:25.464629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.249 [2024-12-07 04:25:25.464642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.249 [2024-12-07 04:25:25.464650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.249 [2024-12-07 04:25:25.464841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.249 [2024-12-07 04:25:25.465261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:22.249 [2024-12-07 04:25:25.465409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:22.249 [2024-12-07 04:25:25.465428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.182 04:25:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.182 04:25:26 -- common/autotest_common.sh@862 -- # return 0 00:10:23.182 04:25:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:23.182 04:25:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:23.182 04:25:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.182 04:25:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.182 04:25:26 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.182 04:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.182 04:25:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.182 [2024-12-07 04:25:26.258174] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.182 04:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.182 04:25:26 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.182 04:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.182 04:25:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.182 Malloc0 00:10:23.182 04:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.182 04:25:26 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:23.182 04:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.182 04:25:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.182 04:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.182 04:25:26 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
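Functionally this run repeats the earlier nvmf_bdevio pass; the difference is the memory mode. Both ends are started without hugepages and with a fixed 1024 MB memory reservation (-s 1024, visible as -m 1024 in the EAL line above, which also switches to --iova-mode=va where the first run used --iova-mode=pa). Condensed from the trace (the bdevio flags appear a few steps below), the two launches are roughly:

# Target, inside the network namespace, no hugepages, 1024 MB of memory
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

# Initiator: bdevio reads the generated NVMe-oF attach config from fd 62
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024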
00:10:23.182 04:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.182 04:25:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.182 04:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.182 04:25:26 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.182 04:25:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.182 04:25:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.182 [2024-12-07 04:25:26.300389] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.182 04:25:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.182 04:25:26 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:23.182 04:25:26 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:23.182 04:25:26 -- nvmf/common.sh@520 -- # config=() 00:10:23.182 04:25:26 -- nvmf/common.sh@520 -- # local subsystem config 00:10:23.182 04:25:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:23.182 04:25:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:23.182 { 00:10:23.182 "params": { 00:10:23.182 "name": "Nvme$subsystem", 00:10:23.182 "trtype": "$TEST_TRANSPORT", 00:10:23.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.182 "adrfam": "ipv4", 00:10:23.182 "trsvcid": "$NVMF_PORT", 00:10:23.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.182 "hdgst": ${hdgst:-false}, 00:10:23.182 "ddgst": ${ddgst:-false} 00:10:23.182 }, 00:10:23.182 "method": "bdev_nvme_attach_controller" 00:10:23.182 } 00:10:23.182 EOF 00:10:23.182 )") 00:10:23.182 04:25:26 -- nvmf/common.sh@542 -- # cat 00:10:23.182 04:25:26 -- nvmf/common.sh@544 -- # jq . 00:10:23.182 04:25:26 -- nvmf/common.sh@545 -- # IFS=, 00:10:23.182 04:25:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:23.182 "params": { 00:10:23.182 "name": "Nvme1", 00:10:23.182 "trtype": "tcp", 00:10:23.182 "traddr": "10.0.0.2", 00:10:23.182 "adrfam": "ipv4", 00:10:23.182 "trsvcid": "4420", 00:10:23.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:23.182 "hdgst": false, 00:10:23.182 "ddgst": false 00:10:23.182 }, 00:10:23.182 "method": "bdev_nvme_attach_controller" 00:10:23.182 }' 00:10:23.182 [2024-12-07 04:25:26.360310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:23.182 [2024-12-07 04:25:26.360405] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64367 ] 00:10:23.439 [2024-12-07 04:25:26.517310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.439 [2024-12-07 04:25:26.665161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.439 [2024-12-07 04:25:26.665297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.439 [2024-12-07 04:25:26.665300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.697 [2024-12-07 04:25:26.830849] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:23.697 [2024-12-07 04:25:26.830890] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:23.697 I/O targets: 00:10:23.697 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:23.697 00:10:23.697 00:10:23.697 CUnit - A unit testing framework for C - Version 2.1-3 00:10:23.697 http://cunit.sourceforge.net/ 00:10:23.697 00:10:23.697 00:10:23.697 Suite: bdevio tests on: Nvme1n1 00:10:23.697 Test: blockdev write read block ...passed 00:10:23.697 Test: blockdev write zeroes read block ...passed 00:10:23.697 Test: blockdev write zeroes read no split ...passed 00:10:23.697 Test: blockdev write zeroes read split ...passed 00:10:23.697 Test: blockdev write zeroes read split partial ...passed 00:10:23.697 Test: blockdev reset ...[2024-12-07 04:25:26.869869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:23.697 [2024-12-07 04:25:26.869971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2010680 (9): Bad file descriptor 00:10:23.697 [2024-12-07 04:25:26.889468] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:23.697 passed 00:10:23.697 Test: blockdev write read 8 blocks ...passed 00:10:23.697 Test: blockdev write read size > 128k ...passed 00:10:23.697 Test: blockdev write read invalid size ...passed 00:10:23.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:23.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:23.697 Test: blockdev write read max offset ...passed 00:10:23.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:23.697 Test: blockdev writev readv 8 blocks ...passed 00:10:23.697 Test: blockdev writev readv 30 x 1block ...passed 00:10:23.697 Test: blockdev writev readv block ...passed 00:10:23.697 Test: blockdev writev readv size > 128k ...passed 00:10:23.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:23.697 Test: blockdev comparev and writev ...[2024-12-07 04:25:26.901053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.901224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.901366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.901481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.902044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.902169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.902293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.902395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.902910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.903050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.903159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.903262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.903787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.903907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.904004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.697 [2024-12-07 04:25:26.904109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:23.697 passed 00:10:23.697 Test: blockdev nvme passthru rw ...passed 00:10:23.697 Test: blockdev nvme passthru vendor specific ...[2024-12-07 04:25:26.905242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.697 [2024-12-07 04:25:26.905387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.905686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.697 [2024-12-07 04:25:26.905811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.906106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.697 [2024-12-07 04:25:26.906220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:23.697 [2024-12-07 04:25:26.906532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.697 [2024-12-07 04:25:26.906681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:23.697 passed 00:10:23.697 Test: blockdev nvme admin passthru ...passed 00:10:23.697 Test: blockdev copy ...passed 00:10:23.697 00:10:23.697 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.697 suites 1 1 n/a 0 0 00:10:23.697 tests 23 23 23 0 0 00:10:23.697 asserts 152 152 152 0 n/a 00:10:23.697 00:10:23.697 Elapsed time = 0.175 seconds 00:10:24.262 04:25:27 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.262 04:25:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.262 04:25:27 -- common/autotest_common.sh@10 -- # set +x 00:10:24.262 04:25:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.262 04:25:27 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:24.262 04:25:27 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:24.262 04:25:27 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:24.262 04:25:27 -- nvmf/common.sh@116 -- # sync 00:10:24.262 04:25:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:24.262 04:25:27 -- nvmf/common.sh@119 -- # set +e 00:10:24.262 04:25:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:24.262 04:25:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:24.262 rmmod nvme_tcp 00:10:24.262 rmmod nvme_fabrics 00:10:24.262 rmmod nvme_keyring 00:10:24.262 04:25:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:24.262 04:25:27 -- nvmf/common.sh@123 -- # set -e 00:10:24.262 04:25:27 -- nvmf/common.sh@124 -- # return 0 00:10:24.262 04:25:27 -- nvmf/common.sh@477 -- # '[' -n 64324 ']' 00:10:24.262 04:25:27 -- nvmf/common.sh@478 -- # killprocess 64324 00:10:24.262 04:25:27 -- common/autotest_common.sh@936 -- # '[' -z 64324 ']' 00:10:24.262 04:25:27 -- common/autotest_common.sh@940 -- # kill -0 64324 00:10:24.262 04:25:27 -- common/autotest_common.sh@941 -- # uname 00:10:24.262 04:25:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:24.262 04:25:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64324 00:10:24.262 04:25:27 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:24.262 04:25:27 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:24.262 04:25:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64324' 00:10:24.262 killing process with pid 64324 00:10:24.262 04:25:27 -- common/autotest_common.sh@955 -- # kill 64324 00:10:24.262 04:25:27 -- common/autotest_common.sh@960 -- # wait 64324 00:10:24.537 04:25:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:24.537 04:25:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:24.537 04:25:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:24.537 04:25:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.537 04:25:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:24.537 04:25:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.537 04:25:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.537 04:25:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.805 04:25:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:24.805 00:10:24.805 real 0m3.184s 00:10:24.805 user 0m10.304s 00:10:24.805 sys 0m1.143s 00:10:24.805 04:25:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:24.805 04:25:27 -- common/autotest_common.sh@10 -- # set +x 00:10:24.805 ************************************ 00:10:24.805 END TEST nvmf_bdevio_no_huge 00:10:24.805 ************************************ 00:10:24.805 04:25:27 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:24.805 04:25:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:24.805 04:25:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.805 04:25:27 -- common/autotest_common.sh@10 -- # set +x 00:10:24.805 ************************************ 00:10:24.805 START TEST nvmf_tls 00:10:24.805 ************************************ 00:10:24.805 04:25:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:24.805 * Looking for test storage... 
00:10:24.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:24.805 04:25:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:24.805 04:25:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:24.805 04:25:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:24.805 04:25:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:24.805 04:25:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:24.805 04:25:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:24.805 04:25:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:24.805 04:25:27 -- scripts/common.sh@335 -- # IFS=.-: 00:10:24.805 04:25:27 -- scripts/common.sh@335 -- # read -ra ver1 00:10:24.805 04:25:27 -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.805 04:25:27 -- scripts/common.sh@336 -- # read -ra ver2 00:10:24.805 04:25:27 -- scripts/common.sh@337 -- # local 'op=<' 00:10:24.805 04:25:27 -- scripts/common.sh@339 -- # ver1_l=2 00:10:24.805 04:25:27 -- scripts/common.sh@340 -- # ver2_l=1 00:10:24.805 04:25:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:24.805 04:25:27 -- scripts/common.sh@343 -- # case "$op" in 00:10:24.805 04:25:27 -- scripts/common.sh@344 -- # : 1 00:10:24.805 04:25:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:24.805 04:25:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.805 04:25:27 -- scripts/common.sh@364 -- # decimal 1 00:10:24.805 04:25:27 -- scripts/common.sh@352 -- # local d=1 00:10:24.805 04:25:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.805 04:25:27 -- scripts/common.sh@354 -- # echo 1 00:10:24.805 04:25:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:24.805 04:25:27 -- scripts/common.sh@365 -- # decimal 2 00:10:24.805 04:25:27 -- scripts/common.sh@352 -- # local d=2 00:10:24.805 04:25:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.805 04:25:27 -- scripts/common.sh@354 -- # echo 2 00:10:24.805 04:25:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:24.805 04:25:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:24.805 04:25:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:24.805 04:25:27 -- scripts/common.sh@367 -- # return 0 00:10:24.805 04:25:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.805 04:25:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.805 --rc genhtml_branch_coverage=1 00:10:24.805 --rc genhtml_function_coverage=1 00:10:24.805 --rc genhtml_legend=1 00:10:24.805 --rc geninfo_all_blocks=1 00:10:24.805 --rc geninfo_unexecuted_blocks=1 00:10:24.805 00:10:24.805 ' 00:10:24.805 04:25:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.805 --rc genhtml_branch_coverage=1 00:10:24.805 --rc genhtml_function_coverage=1 00:10:24.805 --rc genhtml_legend=1 00:10:24.805 --rc geninfo_all_blocks=1 00:10:24.805 --rc geninfo_unexecuted_blocks=1 00:10:24.805 00:10:24.805 ' 00:10:24.805 04:25:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.805 --rc genhtml_branch_coverage=1 00:10:24.805 --rc genhtml_function_coverage=1 00:10:24.805 --rc genhtml_legend=1 00:10:24.805 --rc geninfo_all_blocks=1 00:10:24.805 --rc geninfo_unexecuted_blocks=1 00:10:24.805 00:10:24.805 ' 00:10:24.805 
04:25:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.805 --rc genhtml_branch_coverage=1 00:10:24.805 --rc genhtml_function_coverage=1 00:10:24.805 --rc genhtml_legend=1 00:10:24.805 --rc geninfo_all_blocks=1 00:10:24.805 --rc geninfo_unexecuted_blocks=1 00:10:24.805 00:10:24.805 ' 00:10:24.805 04:25:27 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.805 04:25:27 -- nvmf/common.sh@7 -- # uname -s 00:10:24.805 04:25:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.805 04:25:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.805 04:25:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.805 04:25:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.805 04:25:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.805 04:25:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.805 04:25:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.805 04:25:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.805 04:25:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.805 04:25:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.805 04:25:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:10:24.805 04:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:10:24.805 04:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.805 04:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.805 04:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.805 04:25:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.805 04:25:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.805 04:25:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.805 04:25:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.805 04:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.805 04:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.805 04:25:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.805 04:25:28 -- paths/export.sh@5 -- # export PATH 00:10:24.805 04:25:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.805 04:25:28 -- nvmf/common.sh@46 -- # : 0 00:10:24.805 04:25:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:24.805 04:25:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:24.805 04:25:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:24.805 04:25:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.805 04:25:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.805 04:25:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:24.805 04:25:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:24.805 04:25:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:24.805 04:25:28 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.805 04:25:28 -- target/tls.sh@71 -- # nvmftestinit 00:10:24.805 04:25:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:24.805 04:25:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.805 04:25:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:24.805 04:25:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:24.805 04:25:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:24.805 04:25:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.805 04:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.805 04:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.805 04:25:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:24.805 04:25:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:24.805 04:25:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:24.805 04:25:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:24.805 04:25:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:24.806 04:25:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:24.806 04:25:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.806 04:25:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.806 04:25:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:24.806 04:25:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:24.806 04:25:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.806 04:25:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.806 04:25:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.806 
04:25:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.806 04:25:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.806 04:25:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.806 04:25:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.806 04:25:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.806 04:25:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:24.806 04:25:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:25.064 Cannot find device "nvmf_tgt_br" 00:10:25.064 04:25:28 -- nvmf/common.sh@154 -- # true 00:10:25.064 04:25:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.064 Cannot find device "nvmf_tgt_br2" 00:10:25.064 04:25:28 -- nvmf/common.sh@155 -- # true 00:10:25.064 04:25:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:25.064 04:25:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:25.064 Cannot find device "nvmf_tgt_br" 00:10:25.064 04:25:28 -- nvmf/common.sh@157 -- # true 00:10:25.064 04:25:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:25.064 Cannot find device "nvmf_tgt_br2" 00:10:25.064 04:25:28 -- nvmf/common.sh@158 -- # true 00:10:25.064 04:25:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:25.064 04:25:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:25.064 04:25:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.064 04:25:28 -- nvmf/common.sh@161 -- # true 00:10:25.064 04:25:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.064 04:25:28 -- nvmf/common.sh@162 -- # true 00:10:25.064 04:25:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.064 04:25:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.064 04:25:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.064 04:25:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.064 04:25:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.064 04:25:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.064 04:25:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.064 04:25:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:25.064 04:25:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.064 04:25:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:25.064 04:25:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:25.064 04:25:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:25.064 04:25:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:25.064 04:25:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.064 04:25:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.064 04:25:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.064 04:25:28 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:25.064 04:25:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:25.064 04:25:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.064 04:25:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.323 04:25:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.324 04:25:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.324 04:25:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.324 04:25:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:25.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:25.324 00:10:25.324 --- 10.0.0.2 ping statistics --- 00:10:25.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.324 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:25.324 04:25:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:25.324 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.324 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:25.324 00:10:25.324 --- 10.0.0.3 ping statistics --- 00:10:25.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.324 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:25.324 04:25:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:25.324 00:10:25.324 --- 10.0.0.1 ping statistics --- 00:10:25.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.324 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:25.324 04:25:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.324 04:25:28 -- nvmf/common.sh@421 -- # return 0 00:10:25.324 04:25:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:25.324 04:25:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.324 04:25:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:25.324 04:25:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:25.324 04:25:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.324 04:25:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:25.324 04:25:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:25.324 04:25:28 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:25.324 04:25:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:25.324 04:25:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.324 04:25:28 -- common/autotest_common.sh@10 -- # set +x 00:10:25.324 04:25:28 -- nvmf/common.sh@469 -- # nvmfpid=64554 00:10:25.324 04:25:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:25.324 04:25:28 -- nvmf/common.sh@470 -- # waitforlisten 64554 00:10:25.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
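The teardown and rebuild above is the whole veth fabric these TLS tests run over. Condensed into plain commands, with the same interface names and addresses as the trace (run as root; stale interfaces already removed), the setup is:

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per role; the *_br ends stay in the root namespace and join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and assign the address plan
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the root-namespace ends together and open the firewall for NVMe/TCP
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # verify reachability in both directions before any NVMe traffic
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once both directions answer, nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc) and everything that follows drives it over /var/tmp/spdk.sock.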
00:10:25.324 04:25:28 -- common/autotest_common.sh@829 -- # '[' -z 64554 ']' 00:10:25.324 04:25:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.324 04:25:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.324 04:25:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.324 04:25:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.324 04:25:28 -- common/autotest_common.sh@10 -- # set +x 00:10:25.324 [2024-12-07 04:25:28.425567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:25.324 [2024-12-07 04:25:28.425705] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.583 [2024-12-07 04:25:28.569664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.583 [2024-12-07 04:25:28.638505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:25.583 [2024-12-07 04:25:28.638697] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.583 [2024-12-07 04:25:28.638715] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.583 [2024-12-07 04:25:28.638726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.583 [2024-12-07 04:25:28.638773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.152 04:25:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.152 04:25:29 -- common/autotest_common.sh@862 -- # return 0 00:10:26.152 04:25:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:26.152 04:25:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.152 04:25:29 -- common/autotest_common.sh@10 -- # set +x 00:10:26.152 04:25:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.152 04:25:29 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:26.152 04:25:29 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:26.412 true 00:10:26.671 04:25:29 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:26.671 04:25:29 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:26.930 04:25:29 -- target/tls.sh@82 -- # version=0 00:10:26.930 04:25:29 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:26.930 04:25:29 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:26.930 04:25:30 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:26.930 04:25:30 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:27.190 04:25:30 -- target/tls.sh@90 -- # version=13 00:10:27.190 04:25:30 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:27.190 04:25:30 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:27.449 04:25:30 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:27.449 04:25:30 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:27.708 04:25:30 -- target/tls.sh@98 -- # version=7 00:10:27.708 04:25:30 -- 
target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:27.708 04:25:30 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:27.708 04:25:30 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:27.968 04:25:31 -- target/tls.sh@105 -- # ktls=false 00:10:27.968 04:25:31 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:27.968 04:25:31 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:28.227 04:25:31 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:28.227 04:25:31 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:28.487 04:25:31 -- target/tls.sh@113 -- # ktls=true 00:10:28.487 04:25:31 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:28.487 04:25:31 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:28.746 04:25:31 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:28.746 04:25:31 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:29.005 04:25:32 -- target/tls.sh@121 -- # ktls=false 00:10:29.005 04:25:32 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:29.005 04:25:32 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:29.005 04:25:32 -- target/tls.sh@49 -- # local key hash crc 00:10:29.005 04:25:32 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:29.005 04:25:32 -- target/tls.sh@51 -- # hash=01 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # gzip -1 -c 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # tail -c8 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # head -c 4 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # crc='p$H�' 00:10:29.005 04:25:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:29.005 04:25:32 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:29.005 04:25:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:29.005 04:25:32 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:29.005 04:25:32 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:29.005 04:25:32 -- target/tls.sh@49 -- # local key hash crc 00:10:29.005 04:25:32 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:29.005 04:25:32 -- target/tls.sh@51 -- # hash=01 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # gzip -1 -c 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # tail -c8 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # head -c 4 00:10:29.005 04:25:32 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:29.005 04:25:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:29.005 04:25:32 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:29.005 04:25:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:29.005 04:25:32 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:29.005 04:25:32 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:29.005 04:25:32 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 
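The two format_interchange_psk expansions above build NVMe TLS PSK interchange strings with nothing but coreutils: the configured hex key is used verbatim as an ASCII string, a CRC32 of it is appended, and the result is base64-encoded between a NVMeTLSkey-1:<hash>: prefix and a trailing colon. The gzip pipeline is simply a portable CRC32, since the last eight bytes of a gzip stream are the CRC32 and the input length, so tail -c8 | head -c 4 recovers the checksum. A re-creation of the first key under the same assumptions as the trace:

    key=00112233445566778899aabbccddeeff    # hex string, used as-is (not decoded to raw bytes)
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer = CRC32 + ISIZE
    psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The CRC bytes are binary, which is why the trace renders them as 'p$H' plus an unprintable byte; shell variables cannot hold NUL bytes and command substitution trims trailing newlines, so this trick only works when the CRC avoids those values, as it does for the fixed keys used here. The same recipe over ffeeddccbbaa99887766554433221100 produces the key_2 string, and both strings are then written to key1.txt and key2.txt and restricted to mode 0600 before being handed to the target.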
00:10:29.005 04:25:32 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:29.005 04:25:32 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:29.006 04:25:32 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:29.006 04:25:32 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:29.006 04:25:32 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:29.265 04:25:32 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:29.525 04:25:32 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:29.525 04:25:32 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:29.525 04:25:32 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:29.784 [2024-12-07 04:25:32.872672] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.784 04:25:32 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:30.044 04:25:33 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:30.304 [2024-12-07 04:25:33.344748] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:30.304 [2024-12-07 04:25:33.345210] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.304 04:25:33 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:30.563 malloc0 00:10:30.563 04:25:33 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:30.823 04:25:33 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:30.823 04:25:34 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:43.029 Initializing NVMe Controllers 00:10:43.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.030 Initialization complete. Launching workers. 
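The perf numbers that follow come from a connection that only succeeds because of the target-side sequence just traced. Stripped of the xtrace noise, setup_nvmf_tgt issues this RPC sequence against the nvmf_tgt started earlier (paths as in the trace; rpc.py uses the default /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    $rpc sock_impl_set_options -i ssl --tls-version 13      # restrict the ssl sock impl to TLS 1.3
    $rpc framework_start_init                               # leave --wait-for-rpc mode
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                       # -k marks the listener as TLS-secured
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The initiator half is the spdk_nvme_perf invocation above: -S ssl selects the ssl socket implementation and --psk-path points at the same key1.txt, so both sides present matching PSK identities and the handshake completes.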
00:10:43.030 ======================================================== 00:10:43.030 Latency(us) 00:10:43.030 Device Information : IOPS MiB/s Average min max 00:10:43.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11290.99 44.11 5669.96 923.91 9029.17 00:10:43.030 ======================================================== 00:10:43.030 Total : 11290.99 44.11 5669.96 923.91 9029.17 00:10:43.030 00:10:43.030 04:25:44 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:43.030 04:25:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:43.030 04:25:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:43.030 04:25:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:43.030 04:25:44 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:43.030 04:25:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:43.030 04:25:44 -- target/tls.sh@28 -- # bdevperf_pid=64796 00:10:43.030 04:25:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:43.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:43.030 04:25:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:43.030 04:25:44 -- target/tls.sh@31 -- # waitforlisten 64796 /var/tmp/bdevperf.sock 00:10:43.030 04:25:44 -- common/autotest_common.sh@829 -- # '[' -z 64796 ']' 00:10:43.030 04:25:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:43.030 04:25:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.030 04:25:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:43.030 04:25:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.030 04:25:44 -- common/autotest_common.sh@10 -- # set +x 00:10:43.030 [2024-12-07 04:25:44.311455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:43.030 [2024-12-07 04:25:44.311561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64796 ] 00:10:43.030 [2024-12-07 04:25:44.443206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.030 [2024-12-07 04:25:44.495146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.030 04:25:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.030 04:25:45 -- common/autotest_common.sh@862 -- # return 0 00:10:43.030 04:25:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:43.030 [2024-12-07 04:25:45.472509] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:43.030 TLSTESTn1 00:10:43.030 04:25:45 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:43.030 Running I/O for 10 seconds... 
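The second data-path check, whose results follow, is driven through bdevperf over its own RPC socket rather than spdk_nvme_perf. The run_bdevperf helper boils down to three steps, reproduced from the trace:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # 1) start bdevperf idle (-z: wait to be configured over RPC)
    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

    # 2) attach a TLS-protected controller as bdev "TLSTEST"
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    # 3) run the configured workload and collect the results
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

The same helper is reused for every negative case below; only the key, hostnqn, or subsystem passed to bdev_nvme_attach_controller changes, and those runs are wrapped in NOT so the test asserts that the attach fails.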
00:10:53.047 00:10:53.047 Latency(us) 00:10:53.047 [2024-12-07T04:25:56.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.047 [2024-12-07T04:25:56.287Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:53.047 Verification LBA range: start 0x0 length 0x2000 00:10:53.047 TLSTESTn1 : 10.01 6330.45 24.73 0.00 0.00 20186.94 4259.84 27405.96 00:10:53.047 [2024-12-07T04:25:56.287Z] =================================================================================================================== 00:10:53.047 [2024-12-07T04:25:56.287Z] Total : 6330.45 24.73 0.00 0.00 20186.94 4259.84 27405.96 00:10:53.047 0 00:10:53.047 04:25:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.047 04:25:55 -- target/tls.sh@45 -- # killprocess 64796 00:10:53.047 04:25:55 -- common/autotest_common.sh@936 -- # '[' -z 64796 ']' 00:10:53.047 04:25:55 -- common/autotest_common.sh@940 -- # kill -0 64796 00:10:53.047 04:25:55 -- common/autotest_common.sh@941 -- # uname 00:10:53.047 04:25:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.047 04:25:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64796 00:10:53.047 killing process with pid 64796 00:10:53.047 Received shutdown signal, test time was about 10.000000 seconds 00:10:53.047 00:10:53.047 Latency(us) 00:10:53.047 [2024-12-07T04:25:56.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.047 [2024-12-07T04:25:56.287Z] =================================================================================================================== 00:10:53.047 [2024-12-07T04:25:56.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:53.047 04:25:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:53.047 04:25:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:53.047 04:25:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64796' 00:10:53.047 04:25:55 -- common/autotest_common.sh@955 -- # kill 64796 00:10:53.047 04:25:55 -- common/autotest_common.sh@960 -- # wait 64796 00:10:53.047 04:25:55 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:53.048 04:25:55 -- common/autotest_common.sh@650 -- # local es=0 00:10:53.048 04:25:55 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:53.048 04:25:55 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:53.048 04:25:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.048 04:25:55 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:53.048 04:25:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.048 04:25:55 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:53.048 04:25:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:53.048 04:25:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:53.048 04:25:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:53.048 04:25:55 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:10:53.048 04:25:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:53.048 
04:25:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:53.048 04:25:55 -- target/tls.sh@28 -- # bdevperf_pid=64930 00:10:53.048 04:25:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:53.048 04:25:55 -- target/tls.sh@31 -- # waitforlisten 64930 /var/tmp/bdevperf.sock 00:10:53.048 04:25:55 -- common/autotest_common.sh@829 -- # '[' -z 64930 ']' 00:10:53.048 04:25:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.048 04:25:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.048 04:25:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.048 04:25:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.048 04:25:55 -- common/autotest_common.sh@10 -- # set +x 00:10:53.048 [2024-12-07 04:25:55.978433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:53.048 [2024-12-07 04:25:55.978691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64930 ] 00:10:53.048 [2024-12-07 04:25:56.108488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.048 [2024-12-07 04:25:56.158415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.048 04:25:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.048 04:25:56 -- common/autotest_common.sh@862 -- # return 0 00:10:53.048 04:25:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:53.307 [2024-12-07 04:25:56.437759] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:53.307 [2024-12-07 04:25:56.447064] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:53.307 [2024-12-07 04:25:56.447364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1f650 (107): Transport endpoint is not connected 00:10:53.307 [2024-12-07 04:25:56.448356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1f650 (9): Bad file descriptor 00:10:53.307 [2024-12-07 04:25:56.449360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:53.307 [2024-12-07 04:25:56.449513] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:53.307 [2024-12-07 04:25:56.449624] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:53.307 request: 00:10:53.307 { 00:10:53.307 "name": "TLSTEST", 00:10:53.307 "trtype": "tcp", 00:10:53.307 "traddr": "10.0.0.2", 00:10:53.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.307 "adrfam": "ipv4", 00:10:53.307 "trsvcid": "4420", 00:10:53.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.307 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:10:53.307 "method": "bdev_nvme_attach_controller", 00:10:53.307 "req_id": 1 00:10:53.307 } 00:10:53.307 Got JSON-RPC error response 00:10:53.307 response: 00:10:53.307 { 00:10:53.307 "code": -32602, 00:10:53.307 "message": "Invalid parameters" 00:10:53.307 } 00:10:53.307 04:25:56 -- target/tls.sh@36 -- # killprocess 64930 00:10:53.307 04:25:56 -- common/autotest_common.sh@936 -- # '[' -z 64930 ']' 00:10:53.307 04:25:56 -- common/autotest_common.sh@940 -- # kill -0 64930 00:10:53.307 04:25:56 -- common/autotest_common.sh@941 -- # uname 00:10:53.307 04:25:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.307 04:25:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64930 00:10:53.307 killing process with pid 64930 00:10:53.307 Received shutdown signal, test time was about 10.000000 seconds 00:10:53.307 00:10:53.307 Latency(us) 00:10:53.307 [2024-12-07T04:25:56.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.307 [2024-12-07T04:25:56.547Z] =================================================================================================================== 00:10:53.307 [2024-12-07T04:25:56.547Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:53.307 04:25:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:53.307 04:25:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:53.307 04:25:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64930' 00:10:53.307 04:25:56 -- common/autotest_common.sh@955 -- # kill 64930 00:10:53.307 04:25:56 -- common/autotest_common.sh@960 -- # wait 64930 00:10:53.567 04:25:56 -- target/tls.sh@37 -- # return 1 00:10:53.567 04:25:56 -- common/autotest_common.sh@653 -- # es=1 00:10:53.567 04:25:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:53.567 04:25:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:53.567 04:25:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:53.567 04:25:56 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:53.567 04:25:56 -- common/autotest_common.sh@650 -- # local es=0 00:10:53.567 04:25:56 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:53.567 04:25:56 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:53.567 04:25:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.567 04:25:56 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:53.567 04:25:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.567 04:25:56 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:53.567 04:25:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:53.567 04:25:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:53.567 04:25:56 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:10:53.567 04:25:56 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:53.567 04:25:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:53.567 04:25:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:53.567 04:25:56 -- target/tls.sh@28 -- # bdevperf_pid=64944 00:10:53.567 04:25:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:53.567 04:25:56 -- target/tls.sh@31 -- # waitforlisten 64944 /var/tmp/bdevperf.sock 00:10:53.567 04:25:56 -- common/autotest_common.sh@829 -- # '[' -z 64944 ']' 00:10:53.567 04:25:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.567 04:25:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.567 04:25:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.567 04:25:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.567 04:25:56 -- common/autotest_common.sh@10 -- # set +x 00:10:53.567 [2024-12-07 04:25:56.707346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:53.567 [2024-12-07 04:25:56.707636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64944 ] 00:10:53.826 [2024-12-07 04:25:56.841627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.826 [2024-12-07 04:25:56.893097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.765 04:25:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.765 04:25:57 -- common/autotest_common.sh@862 -- # return 0 00:10:54.765 04:25:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:54.765 [2024-12-07 04:25:57.944213] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:54.765 [2024-12-07 04:25:57.949048] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:54.765 [2024-12-07 04:25:57.949278] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:54.765 [2024-12-07 04:25:57.949338] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:54.765 [2024-12-07 04:25:57.949800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba650 (107): Transport endpoint is not connected 00:10:54.765 [2024-12-07 04:25:57.950785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdba650 (9): Bad file descriptor 00:10:54.765 [2024-12-07 04:25:57.951782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:54.765 [2024-12-07 04:25:57.951820] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:54.765 [2024-12-07 04:25:57.951830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:10:54.765 request: 00:10:54.765 { 00:10:54.765 "name": "TLSTEST", 00:10:54.765 "trtype": "tcp", 00:10:54.765 "traddr": "10.0.0.2", 00:10:54.765 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:54.765 "adrfam": "ipv4", 00:10:54.765 "trsvcid": "4420", 00:10:54.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.765 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:54.765 "method": "bdev_nvme_attach_controller", 00:10:54.765 "req_id": 1 00:10:54.765 } 00:10:54.765 Got JSON-RPC error response 00:10:54.765 response: 00:10:54.765 { 00:10:54.765 "code": -32602, 00:10:54.765 "message": "Invalid parameters" 00:10:54.765 } 00:10:54.765 04:25:57 -- target/tls.sh@36 -- # killprocess 64944 00:10:54.765 04:25:57 -- common/autotest_common.sh@936 -- # '[' -z 64944 ']' 00:10:54.765 04:25:57 -- common/autotest_common.sh@940 -- # kill -0 64944 00:10:54.765 04:25:57 -- common/autotest_common.sh@941 -- # uname 00:10:54.765 04:25:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:54.765 04:25:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64944 00:10:55.025 killing process with pid 64944 00:10:55.025 Received shutdown signal, test time was about 10.000000 seconds 00:10:55.025 00:10:55.025 Latency(us) 00:10:55.025 [2024-12-07T04:25:58.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.025 [2024-12-07T04:25:58.265Z] =================================================================================================================== 00:10:55.025 [2024-12-07T04:25:58.265Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:55.025 04:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:55.025 04:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:55.025 04:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64944' 00:10:55.025 04:25:58 -- common/autotest_common.sh@955 -- # kill 64944 00:10:55.025 04:25:58 -- common/autotest_common.sh@960 -- # wait 64944 00:10:55.025 04:25:58 -- target/tls.sh@37 -- # return 1 00:10:55.025 04:25:58 -- common/autotest_common.sh@653 -- # es=1 00:10:55.025 04:25:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:55.025 04:25:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:55.025 04:25:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:55.025 04:25:58 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:55.025 04:25:58 -- common/autotest_common.sh@650 -- # local es=0 00:10:55.025 04:25:58 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:55.025 04:25:58 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:55.025 04:25:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:55.025 04:25:58 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:55.025 04:25:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:55.025 04:25:58 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:55.025 04:25:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:55.025 04:25:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:55.025 04:25:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:55.025 04:25:58 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:55.025 04:25:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:55.025 04:25:58 -- target/tls.sh@28 -- # bdevperf_pid=64972 00:10:55.025 04:25:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:55.025 04:25:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:55.025 04:25:58 -- target/tls.sh@31 -- # waitforlisten 64972 /var/tmp/bdevperf.sock 00:10:55.025 04:25:58 -- common/autotest_common.sh@829 -- # '[' -z 64972 ']' 00:10:55.025 04:25:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.025 04:25:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.025 04:25:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.025 04:25:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.025 04:25:58 -- common/autotest_common.sh@10 -- # set +x 00:10:55.025 [2024-12-07 04:25:58.229041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:55.025 [2024-12-07 04:25:58.229301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64972 ] 00:10:55.284 [2024-12-07 04:25:58.366277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.284 [2024-12-07 04:25:58.417755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.221 04:25:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.221 04:25:59 -- common/autotest_common.sh@862 -- # return 0 00:10:56.221 04:25:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:56.221 [2024-12-07 04:25:59.382486] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:56.221 [2024-12-07 04:25:59.387257] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:56.221 [2024-12-07 04:25:59.387495] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:56.221 [2024-12-07 04:25:59.387777] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:56.221 [2024-12-07 04:25:59.387984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ee650 
(107): Transport endpoint is not connected 00:10:56.221 [2024-12-07 04:25:59.388971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ee650 (9): Bad file descriptor 00:10:56.221 [2024-12-07 04:25:59.389984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:56.221 [2024-12-07 04:25:59.390145] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:56.221 [2024-12-07 04:25:59.390206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:56.221 request: 00:10:56.221 { 00:10:56.221 "name": "TLSTEST", 00:10:56.221 "trtype": "tcp", 00:10:56.221 "traddr": "10.0.0.2", 00:10:56.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.221 "adrfam": "ipv4", 00:10:56.221 "trsvcid": "4420", 00:10:56.221 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:56.221 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:56.221 "method": "bdev_nvme_attach_controller", 00:10:56.221 "req_id": 1 00:10:56.221 } 00:10:56.221 Got JSON-RPC error response 00:10:56.221 response: 00:10:56.221 { 00:10:56.221 "code": -32602, 00:10:56.221 "message": "Invalid parameters" 00:10:56.221 } 00:10:56.221 04:25:59 -- target/tls.sh@36 -- # killprocess 64972 00:10:56.221 04:25:59 -- common/autotest_common.sh@936 -- # '[' -z 64972 ']' 00:10:56.221 04:25:59 -- common/autotest_common.sh@940 -- # kill -0 64972 00:10:56.221 04:25:59 -- common/autotest_common.sh@941 -- # uname 00:10:56.221 04:25:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:56.221 04:25:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64972 00:10:56.221 killing process with pid 64972 00:10:56.221 Received shutdown signal, test time was about 10.000000 seconds 00:10:56.221 00:10:56.221 Latency(us) 00:10:56.221 [2024-12-07T04:25:59.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.221 [2024-12-07T04:25:59.461Z] =================================================================================================================== 00:10:56.221 [2024-12-07T04:25:59.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:56.221 04:25:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:56.221 04:25:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:56.221 04:25:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64972' 00:10:56.221 04:25:59 -- common/autotest_common.sh@955 -- # kill 64972 00:10:56.221 04:25:59 -- common/autotest_common.sh@960 -- # wait 64972 00:10:56.480 04:25:59 -- target/tls.sh@37 -- # return 1 00:10:56.480 04:25:59 -- common/autotest_common.sh@653 -- # es=1 00:10:56.480 04:25:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:56.480 04:25:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:56.480 04:25:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:56.480 04:25:59 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:56.480 04:25:59 -- common/autotest_common.sh@650 -- # local es=0 00:10:56.480 04:25:59 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:56.480 04:25:59 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:56.480 04:25:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.480 04:25:59 -- common/autotest_common.sh@642 -- # 
type -t run_bdevperf 00:10:56.480 04:25:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.480 04:25:59 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:56.480 04:25:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:56.480 04:25:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:56.480 04:25:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:56.480 04:25:59 -- target/tls.sh@23 -- # psk= 00:10:56.480 04:25:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:56.480 04:25:59 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:56.480 04:25:59 -- target/tls.sh@28 -- # bdevperf_pid=64999 00:10:56.480 04:25:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:56.480 04:25:59 -- target/tls.sh@31 -- # waitforlisten 64999 /var/tmp/bdevperf.sock 00:10:56.480 04:25:59 -- common/autotest_common.sh@829 -- # '[' -z 64999 ']' 00:10:56.480 04:25:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:56.480 04:25:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.480 04:25:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:56.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:56.480 04:25:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.480 04:25:59 -- common/autotest_common.sh@10 -- # set +x 00:10:56.480 [2024-12-07 04:25:59.648863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:56.480 [2024-12-07 04:25:59.649108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64999 ] 00:10:56.740 [2024-12-07 04:25:59.780282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.740 [2024-12-07 04:25:59.830583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.677 04:26:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.677 04:26:00 -- common/autotest_common.sh@862 -- # return 0 00:10:57.677 04:26:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:57.677 [2024-12-07 04:26:00.817487] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:57.677 [2024-12-07 04:26:00.818885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142f010 (9): Bad file descriptor 00:10:57.677 [2024-12-07 04:26:00.819881] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:57.677 [2024-12-07 04:26:00.819906] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:57.677 [2024-12-07 04:26:00.819917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:57.677 request: 00:10:57.677 { 00:10:57.677 "name": "TLSTEST", 00:10:57.677 "trtype": "tcp", 00:10:57.677 "traddr": "10.0.0.2", 00:10:57.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.677 "adrfam": "ipv4", 00:10:57.677 "trsvcid": "4420", 00:10:57.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.677 "method": "bdev_nvme_attach_controller", 00:10:57.677 "req_id": 1 00:10:57.677 } 00:10:57.677 Got JSON-RPC error response 00:10:57.677 response: 00:10:57.677 { 00:10:57.677 "code": -32602, 00:10:57.677 "message": "Invalid parameters" 00:10:57.677 } 00:10:57.677 04:26:00 -- target/tls.sh@36 -- # killprocess 64999 00:10:57.677 04:26:00 -- common/autotest_common.sh@936 -- # '[' -z 64999 ']' 00:10:57.677 04:26:00 -- common/autotest_common.sh@940 -- # kill -0 64999 00:10:57.677 04:26:00 -- common/autotest_common.sh@941 -- # uname 00:10:57.677 04:26:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:57.677 04:26:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64999 00:10:57.677 04:26:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:57.677 04:26:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:57.677 04:26:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64999' 00:10:57.677 killing process with pid 64999 00:10:57.677 Received shutdown signal, test time was about 10.000000 seconds 00:10:57.677 00:10:57.677 Latency(us) 00:10:57.677 [2024-12-07T04:26:00.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.677 [2024-12-07T04:26:00.917Z] =================================================================================================================== 00:10:57.677 [2024-12-07T04:26:00.917Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:57.677 04:26:00 -- common/autotest_common.sh@955 -- # kill 64999 00:10:57.677 04:26:00 -- common/autotest_common.sh@960 -- # wait 64999 00:10:57.935 04:26:01 -- target/tls.sh@37 -- # return 1 00:10:57.935 04:26:01 -- common/autotest_common.sh@653 -- # es=1 00:10:57.935 04:26:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:57.935 04:26:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:57.935 04:26:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:57.935 04:26:01 -- target/tls.sh@167 -- # killprocess 64554 00:10:57.935 04:26:01 -- common/autotest_common.sh@936 -- # '[' -z 64554 ']' 00:10:57.935 04:26:01 -- common/autotest_common.sh@940 -- # kill -0 64554 00:10:57.935 04:26:01 -- common/autotest_common.sh@941 -- # uname 00:10:57.935 04:26:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:57.935 04:26:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64554 00:10:57.935 04:26:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:57.935 04:26:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:57.935 killing process with pid 64554 00:10:57.935 04:26:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64554' 00:10:57.935 04:26:01 -- common/autotest_common.sh@955 -- # kill 64554 00:10:57.935 04:26:01 -- common/autotest_common.sh@960 -- # wait 64554 00:10:58.193 04:26:01 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:10:58.193 04:26:01 -- target/tls.sh@49 -- # local key hash crc 00:10:58.193 04:26:01 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:58.193 04:26:01 -- target/tls.sh@51 -- # hash=02 
00:10:58.193 04:26:01 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:10:58.194 04:26:01 -- target/tls.sh@52 -- # gzip -1 -c 00:10:58.194 04:26:01 -- target/tls.sh@52 -- # tail -c8 00:10:58.194 04:26:01 -- target/tls.sh@52 -- # head -c 4 00:10:58.194 04:26:01 -- target/tls.sh@52 -- # crc='�e�'\''' 00:10:58.194 04:26:01 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:58.194 04:26:01 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:10:58.194 04:26:01 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:58.194 04:26:01 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:58.194 04:26:01 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:58.194 04:26:01 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:58.194 04:26:01 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:58.194 04:26:01 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:10:58.194 04:26:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:58.194 04:26:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:58.194 04:26:01 -- common/autotest_common.sh@10 -- # set +x 00:10:58.194 04:26:01 -- nvmf/common.sh@469 -- # nvmfpid=65042 00:10:58.194 04:26:01 -- nvmf/common.sh@470 -- # waitforlisten 65042 00:10:58.194 04:26:01 -- common/autotest_common.sh@829 -- # '[' -z 65042 ']' 00:10:58.194 04:26:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:58.194 04:26:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.194 04:26:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.194 04:26:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.194 04:26:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.194 04:26:01 -- common/autotest_common.sh@10 -- # set +x 00:10:58.194 [2024-12-07 04:26:01.334160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:58.194 [2024-12-07 04:26:01.334245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.452 [2024-12-07 04:26:01.463520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.452 [2024-12-07 04:26:01.511763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:58.452 [2024-12-07 04:26:01.511904] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.452 [2024-12-07 04:26:01.511916] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.452 [2024-12-07 04:26:01.511925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
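The key_long derivation above is the same CRC32-plus-base64 recipe as before, only with a 48-hex-character key and the 02 hash selector in the prefix; in the NVMe TLS PSK interchange format that selector appears to name the hash (SHA-256 for 01, SHA-384 for 02) used to derive the retained PSK, though nothing in this trace depends on that detail. For completeness, the 02 variant under the same assumptions as the earlier sketch:

    key=00112233445566778899aabbccddeeff0011223344556677
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    key_long="NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

A fresh nvmf_tgt (pid 65042) is then brought up and configured exactly as before, except that key_long.txt is the key registered for host1, and the bdevperf attach that follows succeeds with it.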
00:10:58.452 [2024-12-07 04:26:01.511963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.386 04:26:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.386 04:26:02 -- common/autotest_common.sh@862 -- # return 0 00:10:59.386 04:26:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:59.386 04:26:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.386 04:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:59.386 04:26:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.386 04:26:02 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:59.386 04:26:02 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:59.386 04:26:02 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:59.644 [2024-12-07 04:26:02.649718] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.644 04:26:02 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:59.902 04:26:02 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:00.159 [2024-12-07 04:26:03.205831] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:00.159 [2024-12-07 04:26:03.206085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.159 04:26:03 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:00.417 malloc0 00:11:00.418 04:26:03 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:00.675 04:26:03 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:00.934 04:26:03 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:00.934 04:26:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:00.934 04:26:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:00.934 04:26:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:00.934 04:26:03 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:00.934 04:26:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:00.934 04:26:03 -- target/tls.sh@28 -- # bdevperf_pid=65096 00:11:00.934 04:26:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:00.934 04:26:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:00.934 04:26:03 -- target/tls.sh@31 -- # waitforlisten 65096 /var/tmp/bdevperf.sock 00:11:00.934 04:26:03 -- common/autotest_common.sh@829 -- # '[' -z 65096 ']' 00:11:00.934 04:26:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:00.934 04:26:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.934 04:26:03 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:00.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:00.934 04:26:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.934 04:26:03 -- common/autotest_common.sh@10 -- # set +x 00:11:00.934 [2024-12-07 04:26:03.978060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:00.934 [2024-12-07 04:26:03.978342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65096 ] 00:11:00.934 [2024-12-07 04:26:04.108373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.934 [2024-12-07 04:26:04.158382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.867 04:26:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.867 04:26:04 -- common/autotest_common.sh@862 -- # return 0 00:11:01.867 04:26:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:02.125 [2024-12-07 04:26:05.195266] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:02.125 TLSTESTn1 00:11:02.125 04:26:05 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:02.383 Running I/O for 10 seconds... 00:11:12.408 00:11:12.408 Latency(us) 00:11:12.408 [2024-12-07T04:26:15.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.408 [2024-12-07T04:26:15.648Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:12.408 Verification LBA range: start 0x0 length 0x2000 00:11:12.408 TLSTESTn1 : 10.02 5992.08 23.41 0.00 0.00 21326.03 4081.11 27882.59 00:11:12.408 [2024-12-07T04:26:15.648Z] =================================================================================================================== 00:11:12.408 [2024-12-07T04:26:15.648Z] Total : 5992.08 23.41 0.00 0.00 21326.03 4081.11 27882.59 00:11:12.408 0 00:11:12.408 04:26:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:12.408 04:26:15 -- target/tls.sh@45 -- # killprocess 65096 00:11:12.408 04:26:15 -- common/autotest_common.sh@936 -- # '[' -z 65096 ']' 00:11:12.408 04:26:15 -- common/autotest_common.sh@940 -- # kill -0 65096 00:11:12.408 04:26:15 -- common/autotest_common.sh@941 -- # uname 00:11:12.408 04:26:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:12.408 04:26:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65096 00:11:12.408 killing process with pid 65096 00:11:12.408 Received shutdown signal, test time was about 10.000000 seconds 00:11:12.408 00:11:12.408 Latency(us) 00:11:12.408 [2024-12-07T04:26:15.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.408 [2024-12-07T04:26:15.648Z] =================================================================================================================== 00:11:12.408 [2024-12-07T04:26:15.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:12.408 04:26:15 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:12.408 04:26:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:12.408 04:26:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65096' 00:11:12.408 04:26:15 -- common/autotest_common.sh@955 -- # kill 65096 00:11:12.408 04:26:15 -- common/autotest_common.sh@960 -- # wait 65096 00:11:12.408 04:26:15 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.408 04:26:15 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.408 04:26:15 -- common/autotest_common.sh@650 -- # local es=0 00:11:12.408 04:26:15 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.408 04:26:15 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:12.408 04:26:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.408 04:26:15 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:12.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:12.408 04:26:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:12.408 04:26:15 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.408 04:26:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:12.408 04:26:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:12.408 04:26:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:12.408 04:26:15 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:12.408 04:26:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:12.408 04:26:15 -- target/tls.sh@28 -- # bdevperf_pid=65231 00:11:12.408 04:26:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:12.408 04:26:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:12.408 04:26:15 -- target/tls.sh@31 -- # waitforlisten 65231 /var/tmp/bdevperf.sock 00:11:12.408 04:26:15 -- common/autotest_common.sh@829 -- # '[' -z 65231 ']' 00:11:12.408 04:26:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:12.408 04:26:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.408 04:26:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:12.408 04:26:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.408 04:26:15 -- common/autotest_common.sh@10 -- # set +x 00:11:12.667 [2024-12-07 04:26:15.678929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
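[editor note] For reference, the verify run that just finished above (bdevperf pid 65096) reduces to the following initiator-side sequence. This is a condensed sketch assembled from the commands visible in the log; paths, NQNs and the 10.0.0.2:4420 listener are copied from the log above, and the waitforlisten/trap plumbing is omitted.

    SPDK=/home/vagrant/spdk_repo/spdk
    KEY=$SPDK/test/nvmf/target/key_long.txt

    # Start bdevperf paused (-z) on core 2 with its own RPC socket and a 10 s verify workload queued.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach an NVMe/TCP controller over TLS by handing bdevperf the PSK file; this creates bdev TLSTESTn1.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Release the queued workload and collect the IOPS/latency table printed above.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests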
00:11:12.667 [2024-12-07 04:26:15.679167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65231 ] 00:11:12.667 [2024-12-07 04:26:15.809041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.667 [2024-12-07 04:26:15.859231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.601 04:26:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.601 04:26:16 -- common/autotest_common.sh@862 -- # return 0 00:11:13.601 04:26:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:13.859 [2024-12-07 04:26:16.922790] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:13.859 [2024-12-07 04:26:16.923110] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:13.859 request: 00:11:13.859 { 00:11:13.859 "name": "TLSTEST", 00:11:13.859 "trtype": "tcp", 00:11:13.859 "traddr": "10.0.0.2", 00:11:13.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.859 "adrfam": "ipv4", 00:11:13.859 "trsvcid": "4420", 00:11:13.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.859 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:13.859 "method": "bdev_nvme_attach_controller", 00:11:13.859 "req_id": 1 00:11:13.859 } 00:11:13.859 Got JSON-RPC error response 00:11:13.859 response: 00:11:13.859 { 00:11:13.859 "code": -22, 00:11:13.859 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:13.859 } 00:11:13.859 04:26:16 -- target/tls.sh@36 -- # killprocess 65231 00:11:13.859 04:26:16 -- common/autotest_common.sh@936 -- # '[' -z 65231 ']' 00:11:13.860 04:26:16 -- common/autotest_common.sh@940 -- # kill -0 65231 00:11:13.860 04:26:16 -- common/autotest_common.sh@941 -- # uname 00:11:13.860 04:26:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:13.860 04:26:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65231 00:11:13.860 04:26:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:13.860 killing process with pid 65231 00:11:13.860 04:26:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:13.860 04:26:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65231' 00:11:13.860 Received shutdown signal, test time was about 10.000000 seconds 00:11:13.860 00:11:13.860 Latency(us) 00:11:13.860 [2024-12-07T04:26:17.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.860 [2024-12-07T04:26:17.100Z] =================================================================================================================== 00:11:13.860 [2024-12-07T04:26:17.100Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:13.860 04:26:16 -- common/autotest_common.sh@955 -- # kill 65231 00:11:13.860 04:26:16 -- common/autotest_common.sh@960 -- # wait 65231 00:11:14.118 04:26:17 -- target/tls.sh@37 -- # return 1 00:11:14.118 04:26:17 -- common/autotest_common.sh@653 -- # es=1 00:11:14.118 04:26:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:14.118 04:26:17 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:14.118 04:26:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:14.118 04:26:17 -- target/tls.sh@183 -- # killprocess 65042 00:11:14.118 04:26:17 -- common/autotest_common.sh@936 -- # '[' -z 65042 ']' 00:11:14.118 04:26:17 -- common/autotest_common.sh@940 -- # kill -0 65042 00:11:14.118 04:26:17 -- common/autotest_common.sh@941 -- # uname 00:11:14.118 04:26:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.118 04:26:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65042 00:11:14.118 killing process with pid 65042 00:11:14.118 04:26:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:14.118 04:26:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:14.118 04:26:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65042' 00:11:14.118 04:26:17 -- common/autotest_common.sh@955 -- # kill 65042 00:11:14.118 04:26:17 -- common/autotest_common.sh@960 -- # wait 65042 00:11:14.118 04:26:17 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:14.118 04:26:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:14.118 04:26:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:14.118 04:26:17 -- common/autotest_common.sh@10 -- # set +x 00:11:14.377 04:26:17 -- nvmf/common.sh@469 -- # nvmfpid=65268 00:11:14.377 04:26:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:14.377 04:26:17 -- nvmf/common.sh@470 -- # waitforlisten 65268 00:11:14.377 04:26:17 -- common/autotest_common.sh@829 -- # '[' -z 65268 ']' 00:11:14.377 04:26:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.377 04:26:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.377 04:26:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.377 04:26:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.377 04:26:17 -- common/autotest_common.sh@10 -- # set +x 00:11:14.377 [2024-12-07 04:26:17.422406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:14.377 [2024-12-07 04:26:17.422699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.377 [2024-12-07 04:26:17.561178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.377 [2024-12-07 04:26:17.610497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:14.377 [2024-12-07 04:26:17.610952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.377 [2024-12-07 04:26:17.610977] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.377 [2024-12-07 04:26:17.610987] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
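[editor note] The failed attach above is the intended negative test: with the PSK file world-readable, tcp_load_psk rejects it and rpc_bdev_nvme_attach_controller returns -22 "Could not retrieve PSK from file". A minimal sketch of that check, using the same key path and attach command as the log; the if/else wrapper here stands in for the NOT helper and is illustrative only.

    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    chmod 0666 "$KEY"    # widen permissions so the PSK must be rejected
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
        echo "attach unexpectedly succeeded with a world-readable PSK" >&2
        exit 1
    fi
    chmod 0600 "$KEY"    # restore owner-only permissions, as target/tls.sh@190 does later in the log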
00:11:14.377 [2024-12-07 04:26:17.611021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.316 04:26:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.316 04:26:18 -- common/autotest_common.sh@862 -- # return 0 00:11:15.316 04:26:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:15.316 04:26:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:15.316 04:26:18 -- common/autotest_common.sh@10 -- # set +x 00:11:15.316 04:26:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.316 04:26:18 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.316 04:26:18 -- common/autotest_common.sh@650 -- # local es=0 00:11:15.316 04:26:18 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.316 04:26:18 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:15.316 04:26:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.316 04:26:18 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:15.316 04:26:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:15.316 04:26:18 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.316 04:26:18 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.316 04:26:18 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:15.575 [2024-12-07 04:26:18.641591] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.575 04:26:18 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:15.834 04:26:18 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:16.093 [2024-12-07 04:26:19.233744] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:16.093 [2024-12-07 04:26:19.233951] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.093 04:26:19 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:16.352 malloc0 00:11:16.352 04:26:19 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:16.609 04:26:19 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:16.867 [2024-12-07 04:26:20.011837] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:16.868 [2024-12-07 04:26:20.012121] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:16.868 [2024-12-07 04:26:20.012167] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:16.868 request: 00:11:16.868 { 00:11:16.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.868 "host": "nqn.2016-06.io.spdk:host1", 00:11:16.868 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:16.868 "method": "nvmf_subsystem_add_host", 00:11:16.868 
"req_id": 1 00:11:16.868 } 00:11:16.868 Got JSON-RPC error response 00:11:16.868 response: 00:11:16.868 { 00:11:16.868 "code": -32603, 00:11:16.868 "message": "Internal error" 00:11:16.868 } 00:11:16.868 04:26:20 -- common/autotest_common.sh@653 -- # es=1 00:11:16.868 04:26:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.868 04:26:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.868 04:26:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.868 04:26:20 -- target/tls.sh@189 -- # killprocess 65268 00:11:16.868 04:26:20 -- common/autotest_common.sh@936 -- # '[' -z 65268 ']' 00:11:16.868 04:26:20 -- common/autotest_common.sh@940 -- # kill -0 65268 00:11:16.868 04:26:20 -- common/autotest_common.sh@941 -- # uname 00:11:16.868 04:26:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:16.868 04:26:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65268 00:11:16.868 killing process with pid 65268 00:11:16.868 04:26:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:16.868 04:26:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:16.868 04:26:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65268' 00:11:16.868 04:26:20 -- common/autotest_common.sh@955 -- # kill 65268 00:11:16.868 04:26:20 -- common/autotest_common.sh@960 -- # wait 65268 00:11:17.126 04:26:20 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:17.126 04:26:20 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:17.126 04:26:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:17.126 04:26:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:17.126 04:26:20 -- common/autotest_common.sh@10 -- # set +x 00:11:17.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.126 04:26:20 -- nvmf/common.sh@469 -- # nvmfpid=65326 00:11:17.126 04:26:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:17.126 04:26:20 -- nvmf/common.sh@470 -- # waitforlisten 65326 00:11:17.126 04:26:20 -- common/autotest_common.sh@829 -- # '[' -z 65326 ']' 00:11:17.126 04:26:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.126 04:26:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.126 04:26:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.126 04:26:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.126 04:26:20 -- common/autotest_common.sh@10 -- # set +x 00:11:17.126 [2024-12-07 04:26:20.300994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:17.126 [2024-12-07 04:26:20.301283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.384 [2024-12-07 04:26:20.440833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.384 [2024-12-07 04:26:20.490444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:17.384 [2024-12-07 04:26:20.490874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:17.384 [2024-12-07 04:26:20.491031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.384 [2024-12-07 04:26:20.491157] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.384 [2024-12-07 04:26:20.491194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.320 04:26:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.320 04:26:21 -- common/autotest_common.sh@862 -- # return 0 00:11:18.320 04:26:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:18.320 04:26:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:18.320 04:26:21 -- common/autotest_common.sh@10 -- # set +x 00:11:18.320 04:26:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.320 04:26:21 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:18.320 04:26:21 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:18.320 04:26:21 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:18.320 [2024-12-07 04:26:21.449899] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.320 04:26:21 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:18.578 04:26:21 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:18.836 [2024-12-07 04:26:22.021998] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:18.836 [2024-12-07 04:26:22.022217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.836 04:26:22 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:19.093 malloc0 00:11:19.093 04:26:22 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:19.351 04:26:22 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:19.609 04:26:22 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:19.609 04:26:22 -- target/tls.sh@197 -- # bdevperf_pid=65386 00:11:19.609 04:26:22 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:19.609 04:26:22 -- target/tls.sh@200 -- # waitforlisten 65386 /var/tmp/bdevperf.sock 00:11:19.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:19.609 04:26:22 -- common/autotest_common.sh@829 -- # '[' -z 65386 ']' 00:11:19.609 04:26:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:19.609 04:26:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.609 04:26:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:11:19.609 04:26:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.609 04:26:22 -- common/autotest_common.sh@10 -- # set +x 00:11:19.609 [2024-12-07 04:26:22.728052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:19.609 [2024-12-07 04:26:22.728345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65386 ] 00:11:19.867 [2024-12-07 04:26:22.866730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.867 [2024-12-07 04:26:22.934581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.802 04:26:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.802 04:26:23 -- common/autotest_common.sh@862 -- # return 0 00:11:20.802 04:26:23 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:20.802 [2024-12-07 04:26:23.932040] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:20.802 TLSTESTn1 00:11:20.802 04:26:24 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:21.369 04:26:24 -- target/tls.sh@205 -- # tgtconf='{ 00:11:21.369 "subsystems": [ 00:11:21.369 { 00:11:21.369 "subsystem": "iobuf", 00:11:21.369 "config": [ 00:11:21.369 { 00:11:21.369 "method": "iobuf_set_options", 00:11:21.369 "params": { 00:11:21.369 "small_pool_count": 8192, 00:11:21.369 "large_pool_count": 1024, 00:11:21.369 "small_bufsize": 8192, 00:11:21.369 "large_bufsize": 135168 00:11:21.369 } 00:11:21.369 } 00:11:21.369 ] 00:11:21.369 }, 00:11:21.369 { 00:11:21.369 "subsystem": "sock", 00:11:21.369 "config": [ 00:11:21.369 { 00:11:21.369 "method": "sock_impl_set_options", 00:11:21.369 "params": { 00:11:21.369 "impl_name": "uring", 00:11:21.369 "recv_buf_size": 2097152, 00:11:21.369 "send_buf_size": 2097152, 00:11:21.369 "enable_recv_pipe": true, 00:11:21.369 "enable_quickack": false, 00:11:21.369 "enable_placement_id": 0, 00:11:21.369 "enable_zerocopy_send_server": false, 00:11:21.369 "enable_zerocopy_send_client": false, 00:11:21.369 "zerocopy_threshold": 0, 00:11:21.369 "tls_version": 0, 00:11:21.369 "enable_ktls": false 00:11:21.369 } 00:11:21.369 }, 00:11:21.369 { 00:11:21.369 "method": "sock_impl_set_options", 00:11:21.369 "params": { 00:11:21.369 "impl_name": "posix", 00:11:21.369 "recv_buf_size": 2097152, 00:11:21.369 "send_buf_size": 2097152, 00:11:21.369 "enable_recv_pipe": true, 00:11:21.369 "enable_quickack": false, 00:11:21.369 "enable_placement_id": 0, 00:11:21.369 "enable_zerocopy_send_server": true, 00:11:21.369 "enable_zerocopy_send_client": false, 00:11:21.369 "zerocopy_threshold": 0, 00:11:21.369 "tls_version": 0, 00:11:21.369 "enable_ktls": false 00:11:21.369 } 00:11:21.369 }, 00:11:21.369 { 00:11:21.369 "method": "sock_impl_set_options", 00:11:21.369 "params": { 00:11:21.369 "impl_name": "ssl", 00:11:21.369 "recv_buf_size": 4096, 00:11:21.369 "send_buf_size": 4096, 00:11:21.369 "enable_recv_pipe": true, 00:11:21.369 "enable_quickack": false, 00:11:21.369 "enable_placement_id": 0, 00:11:21.369 "enable_zerocopy_send_server": true, 00:11:21.369 "enable_zerocopy_send_client": false, 00:11:21.369 
"zerocopy_threshold": 0, 00:11:21.369 "tls_version": 0, 00:11:21.369 "enable_ktls": false 00:11:21.369 } 00:11:21.369 } 00:11:21.369 ] 00:11:21.369 }, 00:11:21.369 { 00:11:21.369 "subsystem": "vmd", 00:11:21.370 "config": [] 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "subsystem": "accel", 00:11:21.370 "config": [ 00:11:21.370 { 00:11:21.370 "method": "accel_set_options", 00:11:21.370 "params": { 00:11:21.370 "small_cache_size": 128, 00:11:21.370 "large_cache_size": 16, 00:11:21.370 "task_count": 2048, 00:11:21.370 "sequence_count": 2048, 00:11:21.370 "buf_count": 2048 00:11:21.370 } 00:11:21.370 } 00:11:21.370 ] 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "subsystem": "bdev", 00:11:21.370 "config": [ 00:11:21.370 { 00:11:21.370 "method": "bdev_set_options", 00:11:21.370 "params": { 00:11:21.370 "bdev_io_pool_size": 65535, 00:11:21.370 "bdev_io_cache_size": 256, 00:11:21.370 "bdev_auto_examine": true, 00:11:21.370 "iobuf_small_cache_size": 128, 00:11:21.370 "iobuf_large_cache_size": 16 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "bdev_raid_set_options", 00:11:21.370 "params": { 00:11:21.370 "process_window_size_kb": 1024 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "bdev_iscsi_set_options", 00:11:21.370 "params": { 00:11:21.370 "timeout_sec": 30 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "bdev_nvme_set_options", 00:11:21.370 "params": { 00:11:21.370 "action_on_timeout": "none", 00:11:21.370 "timeout_us": 0, 00:11:21.370 "timeout_admin_us": 0, 00:11:21.370 "keep_alive_timeout_ms": 10000, 00:11:21.370 "transport_retry_count": 4, 00:11:21.370 "arbitration_burst": 0, 00:11:21.370 "low_priority_weight": 0, 00:11:21.370 "medium_priority_weight": 0, 00:11:21.370 "high_priority_weight": 0, 00:11:21.370 "nvme_adminq_poll_period_us": 10000, 00:11:21.370 "nvme_ioq_poll_period_us": 0, 00:11:21.370 "io_queue_requests": 0, 00:11:21.370 "delay_cmd_submit": true, 00:11:21.370 "bdev_retry_count": 3, 00:11:21.370 "transport_ack_timeout": 0, 00:11:21.370 "ctrlr_loss_timeout_sec": 0, 00:11:21.370 "reconnect_delay_sec": 0, 00:11:21.370 "fast_io_fail_timeout_sec": 0, 00:11:21.370 "generate_uuids": false, 00:11:21.370 "transport_tos": 0, 00:11:21.370 "io_path_stat": false, 00:11:21.370 "allow_accel_sequence": false 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "bdev_nvme_set_hotplug", 00:11:21.370 "params": { 00:11:21.370 "period_us": 100000, 00:11:21.370 "enable": false 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "bdev_malloc_create", 00:11:21.370 "params": { 00:11:21.370 "name": "malloc0", 00:11:21.370 "num_blocks": 8192, 00:11:21.370 "block_size": 4096, 00:11:21.370 "physical_block_size": 4096, 00:11:21.370 "uuid": "01580b2a-d547-45fb-8bd0-27ee33f6ab3f", 00:11:21.370 "optimal_io_boundary": 0 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "bdev_wait_for_examine" 00:11:21.370 } 00:11:21.370 ] 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "subsystem": "nbd", 00:11:21.370 "config": [] 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "subsystem": "scheduler", 00:11:21.370 "config": [ 00:11:21.370 { 00:11:21.370 "method": "framework_set_scheduler", 00:11:21.370 "params": { 00:11:21.370 "name": "static" 00:11:21.370 } 00:11:21.370 } 00:11:21.370 ] 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "subsystem": "nvmf", 00:11:21.370 "config": [ 00:11:21.370 { 00:11:21.370 "method": "nvmf_set_config", 00:11:21.370 "params": { 00:11:21.370 "discovery_filter": "match_any", 00:11:21.370 
"admin_cmd_passthru": { 00:11:21.370 "identify_ctrlr": false 00:11:21.370 } 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_set_max_subsystems", 00:11:21.370 "params": { 00:11:21.370 "max_subsystems": 1024 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_set_crdt", 00:11:21.370 "params": { 00:11:21.370 "crdt1": 0, 00:11:21.370 "crdt2": 0, 00:11:21.370 "crdt3": 0 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_create_transport", 00:11:21.370 "params": { 00:11:21.370 "trtype": "TCP", 00:11:21.370 "max_queue_depth": 128, 00:11:21.370 "max_io_qpairs_per_ctrlr": 127, 00:11:21.370 "in_capsule_data_size": 4096, 00:11:21.370 "max_io_size": 131072, 00:11:21.370 "io_unit_size": 131072, 00:11:21.370 "max_aq_depth": 128, 00:11:21.370 "num_shared_buffers": 511, 00:11:21.370 "buf_cache_size": 4294967295, 00:11:21.370 "dif_insert_or_strip": false, 00:11:21.370 "zcopy": false, 00:11:21.370 "c2h_success": false, 00:11:21.370 "sock_priority": 0, 00:11:21.370 "abort_timeout_sec": 1 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_create_subsystem", 00:11:21.370 "params": { 00:11:21.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.370 "allow_any_host": false, 00:11:21.370 "serial_number": "SPDK00000000000001", 00:11:21.370 "model_number": "SPDK bdev Controller", 00:11:21.370 "max_namespaces": 10, 00:11:21.370 "min_cntlid": 1, 00:11:21.370 "max_cntlid": 65519, 00:11:21.370 "ana_reporting": false 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_subsystem_add_host", 00:11:21.370 "params": { 00:11:21.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.370 "host": "nqn.2016-06.io.spdk:host1", 00:11:21.370 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_subsystem_add_ns", 00:11:21.370 "params": { 00:11:21.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.370 "namespace": { 00:11:21.370 "nsid": 1, 00:11:21.370 "bdev_name": "malloc0", 00:11:21.370 "nguid": "01580B2AD54745FB8BD027EE33F6AB3F", 00:11:21.370 "uuid": "01580b2a-d547-45fb-8bd0-27ee33f6ab3f" 00:11:21.370 } 00:11:21.370 } 00:11:21.370 }, 00:11:21.370 { 00:11:21.370 "method": "nvmf_subsystem_add_listener", 00:11:21.370 "params": { 00:11:21.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.370 "listen_address": { 00:11:21.370 "trtype": "TCP", 00:11:21.370 "adrfam": "IPv4", 00:11:21.370 "traddr": "10.0.0.2", 00:11:21.370 "trsvcid": "4420" 00:11:21.370 }, 00:11:21.370 "secure_channel": true 00:11:21.370 } 00:11:21.370 } 00:11:21.370 ] 00:11:21.370 } 00:11:21.370 ] 00:11:21.370 }' 00:11:21.370 04:26:24 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:21.630 04:26:24 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:21.630 "subsystems": [ 00:11:21.630 { 00:11:21.630 "subsystem": "iobuf", 00:11:21.630 "config": [ 00:11:21.630 { 00:11:21.630 "method": "iobuf_set_options", 00:11:21.630 "params": { 00:11:21.630 "small_pool_count": 8192, 00:11:21.630 "large_pool_count": 1024, 00:11:21.630 "small_bufsize": 8192, 00:11:21.630 "large_bufsize": 135168 00:11:21.630 } 00:11:21.630 } 00:11:21.630 ] 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "subsystem": "sock", 00:11:21.630 "config": [ 00:11:21.630 { 00:11:21.630 "method": "sock_impl_set_options", 00:11:21.630 "params": { 00:11:21.630 "impl_name": "uring", 00:11:21.630 "recv_buf_size": 2097152, 00:11:21.630 "send_buf_size": 2097152, 
00:11:21.630 "enable_recv_pipe": true, 00:11:21.630 "enable_quickack": false, 00:11:21.630 "enable_placement_id": 0, 00:11:21.630 "enable_zerocopy_send_server": false, 00:11:21.630 "enable_zerocopy_send_client": false, 00:11:21.630 "zerocopy_threshold": 0, 00:11:21.630 "tls_version": 0, 00:11:21.630 "enable_ktls": false 00:11:21.630 } 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "method": "sock_impl_set_options", 00:11:21.630 "params": { 00:11:21.630 "impl_name": "posix", 00:11:21.630 "recv_buf_size": 2097152, 00:11:21.630 "send_buf_size": 2097152, 00:11:21.630 "enable_recv_pipe": true, 00:11:21.630 "enable_quickack": false, 00:11:21.630 "enable_placement_id": 0, 00:11:21.630 "enable_zerocopy_send_server": true, 00:11:21.630 "enable_zerocopy_send_client": false, 00:11:21.630 "zerocopy_threshold": 0, 00:11:21.630 "tls_version": 0, 00:11:21.630 "enable_ktls": false 00:11:21.630 } 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "method": "sock_impl_set_options", 00:11:21.630 "params": { 00:11:21.630 "impl_name": "ssl", 00:11:21.630 "recv_buf_size": 4096, 00:11:21.630 "send_buf_size": 4096, 00:11:21.630 "enable_recv_pipe": true, 00:11:21.630 "enable_quickack": false, 00:11:21.630 "enable_placement_id": 0, 00:11:21.630 "enable_zerocopy_send_server": true, 00:11:21.630 "enable_zerocopy_send_client": false, 00:11:21.630 "zerocopy_threshold": 0, 00:11:21.630 "tls_version": 0, 00:11:21.630 "enable_ktls": false 00:11:21.630 } 00:11:21.630 } 00:11:21.630 ] 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "subsystem": "vmd", 00:11:21.630 "config": [] 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "subsystem": "accel", 00:11:21.630 "config": [ 00:11:21.630 { 00:11:21.630 "method": "accel_set_options", 00:11:21.630 "params": { 00:11:21.630 "small_cache_size": 128, 00:11:21.630 "large_cache_size": 16, 00:11:21.630 "task_count": 2048, 00:11:21.630 "sequence_count": 2048, 00:11:21.630 "buf_count": 2048 00:11:21.630 } 00:11:21.630 } 00:11:21.630 ] 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "subsystem": "bdev", 00:11:21.630 "config": [ 00:11:21.630 { 00:11:21.630 "method": "bdev_set_options", 00:11:21.630 "params": { 00:11:21.630 "bdev_io_pool_size": 65535, 00:11:21.630 "bdev_io_cache_size": 256, 00:11:21.630 "bdev_auto_examine": true, 00:11:21.630 "iobuf_small_cache_size": 128, 00:11:21.630 "iobuf_large_cache_size": 16 00:11:21.630 } 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "method": "bdev_raid_set_options", 00:11:21.630 "params": { 00:11:21.630 "process_window_size_kb": 1024 00:11:21.630 } 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "method": "bdev_iscsi_set_options", 00:11:21.630 "params": { 00:11:21.630 "timeout_sec": 30 00:11:21.630 } 00:11:21.630 }, 00:11:21.630 { 00:11:21.630 "method": "bdev_nvme_set_options", 00:11:21.630 "params": { 00:11:21.630 "action_on_timeout": "none", 00:11:21.630 "timeout_us": 0, 00:11:21.630 "timeout_admin_us": 0, 00:11:21.630 "keep_alive_timeout_ms": 10000, 00:11:21.631 "transport_retry_count": 4, 00:11:21.631 "arbitration_burst": 0, 00:11:21.631 "low_priority_weight": 0, 00:11:21.631 "medium_priority_weight": 0, 00:11:21.631 "high_priority_weight": 0, 00:11:21.631 "nvme_adminq_poll_period_us": 10000, 00:11:21.631 "nvme_ioq_poll_period_us": 0, 00:11:21.631 "io_queue_requests": 512, 00:11:21.631 "delay_cmd_submit": true, 00:11:21.631 "bdev_retry_count": 3, 00:11:21.631 "transport_ack_timeout": 0, 00:11:21.631 "ctrlr_loss_timeout_sec": 0, 00:11:21.631 "reconnect_delay_sec": 0, 00:11:21.631 "fast_io_fail_timeout_sec": 0, 00:11:21.631 "generate_uuids": false, 00:11:21.631 
"transport_tos": 0, 00:11:21.631 "io_path_stat": false, 00:11:21.631 "allow_accel_sequence": false 00:11:21.631 } 00:11:21.631 }, 00:11:21.631 { 00:11:21.631 "method": "bdev_nvme_attach_controller", 00:11:21.631 "params": { 00:11:21.631 "name": "TLSTEST", 00:11:21.631 "trtype": "TCP", 00:11:21.631 "adrfam": "IPv4", 00:11:21.631 "traddr": "10.0.0.2", 00:11:21.631 "trsvcid": "4420", 00:11:21.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.631 "prchk_reftag": false, 00:11:21.631 "prchk_guard": false, 00:11:21.631 "ctrlr_loss_timeout_sec": 0, 00:11:21.631 "reconnect_delay_sec": 0, 00:11:21.631 "fast_io_fail_timeout_sec": 0, 00:11:21.631 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:21.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.631 "hdgst": false, 00:11:21.631 "ddgst": false 00:11:21.631 } 00:11:21.631 }, 00:11:21.631 { 00:11:21.631 "method": "bdev_nvme_set_hotplug", 00:11:21.631 "params": { 00:11:21.631 "period_us": 100000, 00:11:21.631 "enable": false 00:11:21.631 } 00:11:21.631 }, 00:11:21.631 { 00:11:21.631 "method": "bdev_wait_for_examine" 00:11:21.631 } 00:11:21.631 ] 00:11:21.631 }, 00:11:21.631 { 00:11:21.631 "subsystem": "nbd", 00:11:21.631 "config": [] 00:11:21.631 } 00:11:21.631 ] 00:11:21.631 }' 00:11:21.631 04:26:24 -- target/tls.sh@208 -- # killprocess 65386 00:11:21.631 04:26:24 -- common/autotest_common.sh@936 -- # '[' -z 65386 ']' 00:11:21.631 04:26:24 -- common/autotest_common.sh@940 -- # kill -0 65386 00:11:21.631 04:26:24 -- common/autotest_common.sh@941 -- # uname 00:11:21.631 04:26:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:21.631 04:26:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65386 00:11:21.631 killing process with pid 65386 00:11:21.631 Received shutdown signal, test time was about 10.000000 seconds 00:11:21.631 00:11:21.631 Latency(us) 00:11:21.631 [2024-12-07T04:26:24.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.631 [2024-12-07T04:26:24.871Z] =================================================================================================================== 00:11:21.631 [2024-12-07T04:26:24.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:21.631 04:26:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:21.631 04:26:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:21.631 04:26:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65386' 00:11:21.631 04:26:24 -- common/autotest_common.sh@955 -- # kill 65386 00:11:21.631 04:26:24 -- common/autotest_common.sh@960 -- # wait 65386 00:11:21.891 04:26:24 -- target/tls.sh@209 -- # killprocess 65326 00:11:21.891 04:26:24 -- common/autotest_common.sh@936 -- # '[' -z 65326 ']' 00:11:21.891 04:26:24 -- common/autotest_common.sh@940 -- # kill -0 65326 00:11:21.891 04:26:24 -- common/autotest_common.sh@941 -- # uname 00:11:21.891 04:26:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:21.891 04:26:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65326 00:11:21.891 killing process with pid 65326 00:11:21.891 04:26:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:21.891 04:26:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:21.891 04:26:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65326' 00:11:21.891 04:26:24 -- common/autotest_common.sh@955 -- # kill 65326 00:11:21.891 04:26:24 -- common/autotest_common.sh@960 -- # 
wait 65326 00:11:21.891 04:26:25 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:21.891 04:26:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:21.891 04:26:25 -- target/tls.sh@212 -- # echo '{ 00:11:21.891 "subsystems": [ 00:11:21.891 { 00:11:21.891 "subsystem": "iobuf", 00:11:21.891 "config": [ 00:11:21.891 { 00:11:21.891 "method": "iobuf_set_options", 00:11:21.891 "params": { 00:11:21.891 "small_pool_count": 8192, 00:11:21.891 "large_pool_count": 1024, 00:11:21.891 "small_bufsize": 8192, 00:11:21.891 "large_bufsize": 135168 00:11:21.891 } 00:11:21.891 } 00:11:21.891 ] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "sock", 00:11:21.891 "config": [ 00:11:21.891 { 00:11:21.891 "method": "sock_impl_set_options", 00:11:21.891 "params": { 00:11:21.891 "impl_name": "uring", 00:11:21.891 "recv_buf_size": 2097152, 00:11:21.891 "send_buf_size": 2097152, 00:11:21.891 "enable_recv_pipe": true, 00:11:21.891 "enable_quickack": false, 00:11:21.891 "enable_placement_id": 0, 00:11:21.891 "enable_zerocopy_send_server": false, 00:11:21.891 "enable_zerocopy_send_client": false, 00:11:21.891 "zerocopy_threshold": 0, 00:11:21.891 "tls_version": 0, 00:11:21.891 "enable_ktls": false 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "sock_impl_set_options", 00:11:21.891 "params": { 00:11:21.891 "impl_name": "posix", 00:11:21.891 "recv_buf_size": 2097152, 00:11:21.891 "send_buf_size": 2097152, 00:11:21.891 "enable_recv_pipe": true, 00:11:21.891 "enable_quickack": false, 00:11:21.891 "enable_placement_id": 0, 00:11:21.891 "enable_zerocopy_send_server": true, 00:11:21.891 "enable_zerocopy_send_client": false, 00:11:21.891 "zerocopy_threshold": 0, 00:11:21.891 "tls_version": 0, 00:11:21.891 "enable_ktls": false 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "sock_impl_set_options", 00:11:21.891 "params": { 00:11:21.891 "impl_name": "ssl", 00:11:21.891 "recv_buf_size": 4096, 00:11:21.891 "send_buf_size": 4096, 00:11:21.891 "enable_recv_pipe": true, 00:11:21.891 "enable_quickack": false, 00:11:21.891 "enable_placement_id": 0, 00:11:21.891 "enable_zerocopy_send_server": true, 00:11:21.891 "enable_zerocopy_send_client": false, 00:11:21.891 "zerocopy_threshold": 0, 00:11:21.891 "tls_version": 0, 00:11:21.891 "enable_ktls": false 00:11:21.891 } 00:11:21.891 } 00:11:21.891 ] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "vmd", 00:11:21.891 "config": [] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "accel", 00:11:21.891 "config": [ 00:11:21.891 { 00:11:21.891 "method": "accel_set_options", 00:11:21.891 "params": { 00:11:21.891 "small_cache_size": 128, 00:11:21.891 "large_cache_size": 16, 00:11:21.891 "task_count": 2048, 00:11:21.891 "sequence_count": 2048, 00:11:21.891 "buf_count": 2048 00:11:21.891 } 00:11:21.891 } 00:11:21.891 ] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "bdev", 00:11:21.891 "config": [ 00:11:21.891 { 00:11:21.891 "method": "bdev_set_options", 00:11:21.891 "params": { 00:11:21.891 "bdev_io_pool_size": 65535, 00:11:21.891 "bdev_io_cache_size": 256, 00:11:21.891 "bdev_auto_examine": true, 00:11:21.891 "iobuf_small_cache_size": 128, 00:11:21.891 "iobuf_large_cache_size": 16 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "bdev_raid_set_options", 00:11:21.891 "params": { 00:11:21.891 "process_window_size_kb": 1024 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "bdev_iscsi_set_options", 00:11:21.891 "params": { 00:11:21.891 "timeout_sec": 30 
00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "bdev_nvme_set_options", 00:11:21.891 "params": { 00:11:21.891 "action_on_timeout": "none", 00:11:21.891 "timeout_us": 0, 00:11:21.891 "timeout_admin_us": 0, 00:11:21.891 "keep_alive_timeout_ms": 10000, 00:11:21.891 "transport_retry_count": 4, 00:11:21.891 "arbitration_burst": 0, 00:11:21.891 "low_priority_weight": 0, 00:11:21.891 "medium_priority_weight": 0, 00:11:21.891 "high_priority_weight": 0, 00:11:21.891 "nvme_adminq_poll_period_us": 10000, 00:11:21.891 "nvme_ioq_poll_period_us": 0, 00:11:21.891 "io_queue_requests": 0, 00:11:21.891 "delay_cmd_submit": true, 00:11:21.891 "bdev_retry_count": 3, 00:11:21.891 "transport_ack_timeout": 0, 00:11:21.891 "ctrlr_loss_timeout_sec": 0, 00:11:21.891 "reconnect_delay_sec": 0, 00:11:21.891 "fast_io_fail_timeout_sec": 0, 00:11:21.891 "generate_uuids": false, 00:11:21.891 "transport_tos": 0, 00:11:21.891 "io_path_stat": false, 00:11:21.891 "allow_accel_sequence": false 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "bdev_nvme_set_hotplug", 00:11:21.891 "params": { 00:11:21.891 "period_us": 100000, 00:11:21.891 "enable": false 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "bdev_malloc_create", 00:11:21.891 "params": { 00:11:21.891 "name": "malloc0", 00:11:21.891 "num_blocks": 8192, 00:11:21.891 "block_size": 4096, 00:11:21.891 "physical_block_size": 4096, 00:11:21.891 "uuid": "01580b2a-d547-45fb-8bd0-27ee33f6ab3f", 00:11:21.891 "optimal_io_boundary": 0 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "bdev_wait_for_examine" 00:11:21.891 } 00:11:21.891 ] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "nbd", 00:11:21.891 "config": [] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "scheduler", 00:11:21.891 "config": [ 00:11:21.891 { 00:11:21.891 "method": "framework_set_scheduler", 00:11:21.891 "params": { 00:11:21.891 "name": "static" 00:11:21.891 } 00:11:21.891 } 00:11:21.891 ] 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "subsystem": "nvmf", 00:11:21.891 "config": [ 00:11:21.891 { 00:11:21.891 "method": "nvmf_set_config", 00:11:21.891 "params": { 00:11:21.891 "discovery_filter": "match_any", 00:11:21.891 "admin_cmd_passthru": { 00:11:21.891 "identify_ctrlr": false 00:11:21.891 } 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "nvmf_set_max_subsystems", 00:11:21.891 "params": { 00:11:21.891 "max_subsystems": 1024 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "nvmf_set_crdt", 00:11:21.891 "params": { 00:11:21.891 "crdt1": 0, 00:11:21.891 "crdt2": 0, 00:11:21.891 "crdt3": 0 00:11:21.891 } 00:11:21.891 }, 00:11:21.891 { 00:11:21.891 "method": "nvmf_create_transport", 00:11:21.891 "params": { 00:11:21.891 "trtype": "TCP", 00:11:21.891 "max_queue_depth": 128, 00:11:21.891 "max_io_qpairs_per_ctrlr": 127, 00:11:21.891 "in_capsule_data_size": 4096, 00:11:21.891 "max_io_size": 131072, 00:11:21.892 "io_unit_size": 131072, 00:11:21.892 "max_aq_depth": 128, 00:11:21.892 "num_shared_buffers": 511, 00:11:21.892 "buf_cache_size": 4294967295, 00:11:21.892 "dif_insert_or_strip": false, 00:11:21.892 "zcopy": false, 00:11:21.892 "c2h_success": false, 00:11:21.892 "sock_priority": 0, 00:11:21.892 "abort_timeout_sec": 1 00:11:21.892 } 00:11:21.892 }, 00:11:21.892 { 00:11:21.892 "method": "nvmf_create_subsystem", 00:11:21.892 "params": { 00:11:21.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.892 "allow_any_host": false, 00:11:21.892 "serial_number": "SPDK00000000000001", 
00:11:21.892 "model_number": "SPDK bdev Controller", 00:11:21.892 "max_namespaces": 10, 00:11:21.892 "min_cntlid": 1, 00:11:21.892 "max_cntlid": 65519, 00:11:21.892 "ana_reporting": false 00:11:21.892 } 00:11:21.892 }, 00:11:21.892 { 00:11:21.892 "method": "nvmf_subsystem_add_host", 00:11:21.892 "params": { 00:11:21.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.892 "host": "nqn.2016-06.io.spdk:host1", 00:11:21.892 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:21.892 } 00:11:21.892 }, 00:11:21.892 { 00:11:21.892 "method": "nvmf_subsystem_add_ns", 00:11:21.892 "params": { 00:11:21.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.892 "namespace": { 00:11:21.892 "nsid": 1, 00:11:21.892 "bdev_name": "malloc0", 00:11:21.892 "nguid": "01580B2AD54745FB8BD027EE33F6AB3F", 00:11:21.892 "uuid": "01580b2a-d547-45fb-8bd0-27ee33f6ab3f" 00:11:21.892 } 00:11:21.892 } 00:11:21.892 }, 00:11:21.892 { 00:11:21.892 "method": "nvmf_subsystem_add_listener", 00:11:21.892 "params": { 00:11:21.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.892 "listen_address": { 00:11:21.892 "trtype": "TCP", 00:11:21.892 "adrfam": "IPv4", 00:11:21.892 "traddr": "10.0.0.2", 00:11:21.892 "trsvcid": "4420" 00:11:21.892 }, 00:11:21.892 "secure_channel": true 00:11:21.892 } 00:11:21.892 } 00:11:21.892 ] 00:11:21.892 } 00:11:21.892 ] 00:11:21.892 }' 00:11:21.892 04:26:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.892 04:26:25 -- common/autotest_common.sh@10 -- # set +x 00:11:21.892 04:26:25 -- nvmf/common.sh@469 -- # nvmfpid=65429 00:11:21.892 04:26:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:21.892 04:26:25 -- nvmf/common.sh@470 -- # waitforlisten 65429 00:11:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.892 04:26:25 -- common/autotest_common.sh@829 -- # '[' -z 65429 ']' 00:11:21.892 04:26:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.892 04:26:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.892 04:26:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.892 04:26:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.892 04:26:25 -- common/autotest_common.sh@10 -- # set +x 00:11:21.892 [2024-12-07 04:26:25.121471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:21.892 [2024-12-07 04:26:25.121552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.150 [2024-12-07 04:26:25.250279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.150 [2024-12-07 04:26:25.298216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.150 [2024-12-07 04:26:25.298379] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.150 [2024-12-07 04:26:25.298393] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.150 [2024-12-07 04:26:25.298400] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
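[editor note] The phase above captures the live JSON configuration of both applications with save_config and restarts the target directly from that JSON. A sketch of the pattern; the /dev/fd/62 seen in the log is consistent with bash process substitution, written here as <(...), and the netns wrapper is kept as in the log.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    tgtconf=$($RPC save_config)                                    # target config over /var/tmp/spdk.sock
    bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config)     # bdevperf config over its own socket

    # Restart the target inside the test netns, feeding the captured JSON instead of re-issuing RPCs.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &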
00:11:22.151 [2024-12-07 04:26:25.298430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.409 [2024-12-07 04:26:25.478401] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.409 [2024-12-07 04:26:25.510353] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:22.409 [2024-12-07 04:26:25.510766] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.981 04:26:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.981 04:26:26 -- common/autotest_common.sh@862 -- # return 0 00:11:22.981 04:26:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:22.981 04:26:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.981 04:26:26 -- common/autotest_common.sh@10 -- # set +x 00:11:22.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:22.981 04:26:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.981 04:26:26 -- target/tls.sh@216 -- # bdevperf_pid=65461 00:11:22.981 04:26:26 -- target/tls.sh@217 -- # waitforlisten 65461 /var/tmp/bdevperf.sock 00:11:22.981 04:26:26 -- common/autotest_common.sh@829 -- # '[' -z 65461 ']' 00:11:22.981 04:26:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.981 04:26:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.981 04:26:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.981 04:26:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.981 04:26:26 -- common/autotest_common.sh@10 -- # set +x 00:11:22.981 04:26:26 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:22.981 04:26:26 -- target/tls.sh@213 -- # echo '{ 00:11:22.981 "subsystems": [ 00:11:22.981 { 00:11:22.981 "subsystem": "iobuf", 00:11:22.981 "config": [ 00:11:22.981 { 00:11:22.981 "method": "iobuf_set_options", 00:11:22.981 "params": { 00:11:22.981 "small_pool_count": 8192, 00:11:22.981 "large_pool_count": 1024, 00:11:22.981 "small_bufsize": 8192, 00:11:22.981 "large_bufsize": 135168 00:11:22.981 } 00:11:22.981 } 00:11:22.981 ] 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "subsystem": "sock", 00:11:22.981 "config": [ 00:11:22.981 { 00:11:22.981 "method": "sock_impl_set_options", 00:11:22.981 "params": { 00:11:22.981 "impl_name": "uring", 00:11:22.981 "recv_buf_size": 2097152, 00:11:22.981 "send_buf_size": 2097152, 00:11:22.981 "enable_recv_pipe": true, 00:11:22.981 "enable_quickack": false, 00:11:22.981 "enable_placement_id": 0, 00:11:22.981 "enable_zerocopy_send_server": false, 00:11:22.981 "enable_zerocopy_send_client": false, 00:11:22.981 "zerocopy_threshold": 0, 00:11:22.981 "tls_version": 0, 00:11:22.981 "enable_ktls": false 00:11:22.981 } 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "method": "sock_impl_set_options", 00:11:22.981 "params": { 00:11:22.981 "impl_name": "posix", 00:11:22.981 "recv_buf_size": 2097152, 00:11:22.981 "send_buf_size": 2097152, 00:11:22.981 "enable_recv_pipe": true, 00:11:22.981 "enable_quickack": false, 00:11:22.981 "enable_placement_id": 0, 00:11:22.981 "enable_zerocopy_send_server": true, 00:11:22.981 "enable_zerocopy_send_client": false, 00:11:22.981 "zerocopy_threshold": 0, 00:11:22.981 "tls_version": 0, 00:11:22.981 
"enable_ktls": false 00:11:22.981 } 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "method": "sock_impl_set_options", 00:11:22.981 "params": { 00:11:22.981 "impl_name": "ssl", 00:11:22.981 "recv_buf_size": 4096, 00:11:22.981 "send_buf_size": 4096, 00:11:22.981 "enable_recv_pipe": true, 00:11:22.981 "enable_quickack": false, 00:11:22.981 "enable_placement_id": 0, 00:11:22.981 "enable_zerocopy_send_server": true, 00:11:22.981 "enable_zerocopy_send_client": false, 00:11:22.981 "zerocopy_threshold": 0, 00:11:22.981 "tls_version": 0, 00:11:22.981 "enable_ktls": false 00:11:22.981 } 00:11:22.981 } 00:11:22.981 ] 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "subsystem": "vmd", 00:11:22.981 "config": [] 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "subsystem": "accel", 00:11:22.981 "config": [ 00:11:22.981 { 00:11:22.981 "method": "accel_set_options", 00:11:22.981 "params": { 00:11:22.981 "small_cache_size": 128, 00:11:22.981 "large_cache_size": 16, 00:11:22.981 "task_count": 2048, 00:11:22.981 "sequence_count": 2048, 00:11:22.981 "buf_count": 2048 00:11:22.981 } 00:11:22.981 } 00:11:22.981 ] 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "subsystem": "bdev", 00:11:22.981 "config": [ 00:11:22.981 { 00:11:22.981 "method": "bdev_set_options", 00:11:22.981 "params": { 00:11:22.981 "bdev_io_pool_size": 65535, 00:11:22.981 "bdev_io_cache_size": 256, 00:11:22.981 "bdev_auto_examine": true, 00:11:22.981 "iobuf_small_cache_size": 128, 00:11:22.981 "iobuf_large_cache_size": 16 00:11:22.981 } 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "method": "bdev_raid_set_options", 00:11:22.981 "params": { 00:11:22.982 "process_window_size_kb": 1024 00:11:22.982 } 00:11:22.982 }, 00:11:22.982 { 00:11:22.982 "method": "bdev_iscsi_set_options", 00:11:22.982 "params": { 00:11:22.982 "timeout_sec": 30 00:11:22.982 } 00:11:22.982 }, 00:11:22.982 { 00:11:22.982 "method": "bdev_nvme_set_options", 00:11:22.982 "params": { 00:11:22.982 "action_on_timeout": "none", 00:11:22.982 "timeout_us": 0, 00:11:22.982 "timeout_admin_us": 0, 00:11:22.982 "keep_alive_timeout_ms": 10000, 00:11:22.982 "transport_retry_count": 4, 00:11:22.982 "arbitration_burst": 0, 00:11:22.982 "low_priority_weight": 0, 00:11:22.982 "medium_priority_weight": 0, 00:11:22.982 "high_priority_weight": 0, 00:11:22.982 "nvme_adminq_poll_period_us": 10000, 00:11:22.982 "nvme_ioq_poll_period_us": 0, 00:11:22.982 "io_queue_requests": 512, 00:11:22.982 "delay_cmd_submit": true, 00:11:22.982 "bdev_retry_count": 3, 00:11:22.982 "transport_ack_timeout": 0, 00:11:22.982 "ctrlr_loss_timeout_sec": 0, 00:11:22.982 "reconnect_delay_sec": 0, 00:11:22.982 "fast_io_fail_timeout_sec": 0, 00:11:22.982 "generate_uuids": false, 00:11:22.982 "transport_tos": 0, 00:11:22.982 "io_path_stat": false, 00:11:22.982 "allow_accel_sequence": false 00:11:22.982 } 00:11:22.982 }, 00:11:22.982 { 00:11:22.982 "method": "bdev_nvme_attach_controller", 00:11:22.982 "params": { 00:11:22.982 "name": "TLSTEST", 00:11:22.982 "trtype": "TCP", 00:11:22.982 "adrfam": "IPv4", 00:11:22.982 "traddr": "10.0.0.2", 00:11:22.982 "trsvcid": "4420", 00:11:22.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.982 "prchk_reftag": false, 00:11:22.982 "prchk_guard": false, 00:11:22.982 "ctrlr_loss_timeout_sec": 0, 00:11:22.982 "reconnect_delay_sec": 0, 00:11:22.982 "fast_io_fail_timeout_sec": 0, 00:11:22.982 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:22.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.982 "hdgst": false, 00:11:22.982 "ddgst": false 00:11:22.982 } 00:11:22.982 }, 00:11:22.982 
{ 00:11:22.982 "method": "bdev_nvme_set_hotplug", 00:11:22.982 "params": { 00:11:22.982 "period_us": 100000, 00:11:22.982 "enable": false 00:11:22.982 } 00:11:22.982 }, 00:11:22.982 { 00:11:22.982 "method": "bdev_wait_for_examine" 00:11:22.982 } 00:11:22.982 ] 00:11:22.982 }, 00:11:22.982 { 00:11:22.982 "subsystem": "nbd", 00:11:22.982 "config": [] 00:11:22.982 } 00:11:22.982 ] 00:11:22.982 }' 00:11:23.257 [2024-12-07 04:26:26.221330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:23.257 [2024-12-07 04:26:26.221668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65461 ] 00:11:23.257 [2024-12-07 04:26:26.362483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.257 [2024-12-07 04:26:26.434846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.525 [2024-12-07 04:26:26.560590] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:24.093 04:26:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.093 04:26:27 -- common/autotest_common.sh@862 -- # return 0 00:11:24.093 04:26:27 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:24.093 Running I/O for 10 seconds... 00:11:36.301 00:11:36.301 Latency(us) 00:11:36.301 [2024-12-07T04:26:39.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.301 [2024-12-07T04:26:39.541Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:36.301 Verification LBA range: start 0x0 length 0x2000 00:11:36.301 TLSTESTn1 : 10.01 6097.74 23.82 0.00 0.00 20957.08 4081.11 20256.58 00:11:36.301 [2024-12-07T04:26:39.541Z] =================================================================================================================== 00:11:36.301 [2024-12-07T04:26:39.541Z] Total : 6097.74 23.82 0.00 0.00 20957.08 4081.11 20256.58 00:11:36.301 0 00:11:36.301 04:26:37 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:36.301 04:26:37 -- target/tls.sh@223 -- # killprocess 65461 00:11:36.301 04:26:37 -- common/autotest_common.sh@936 -- # '[' -z 65461 ']' 00:11:36.301 04:26:37 -- common/autotest_common.sh@940 -- # kill -0 65461 00:11:36.301 04:26:37 -- common/autotest_common.sh@941 -- # uname 00:11:36.301 04:26:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.301 04:26:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65461 00:11:36.301 killing process with pid 65461 00:11:36.301 Received shutdown signal, test time was about 10.000000 seconds 00:11:36.301 00:11:36.301 Latency(us) 00:11:36.301 [2024-12-07T04:26:39.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.301 [2024-12-07T04:26:39.541Z] =================================================================================================================== 00:11:36.301 [2024-12-07T04:26:39.541Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:36.301 04:26:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:36.301 04:26:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:36.301 04:26:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65461' 00:11:36.301 04:26:37 -- 
common/autotest_common.sh@955 -- # kill 65461 00:11:36.301 04:26:37 -- common/autotest_common.sh@960 -- # wait 65461 00:11:36.301 04:26:37 -- target/tls.sh@224 -- # killprocess 65429 00:11:36.301 04:26:37 -- common/autotest_common.sh@936 -- # '[' -z 65429 ']' 00:11:36.301 04:26:37 -- common/autotest_common.sh@940 -- # kill -0 65429 00:11:36.301 04:26:37 -- common/autotest_common.sh@941 -- # uname 00:11:36.301 04:26:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.301 04:26:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65429 00:11:36.302 killing process with pid 65429 00:11:36.302 04:26:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:36.302 04:26:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:36.302 04:26:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65429' 00:11:36.302 04:26:37 -- common/autotest_common.sh@955 -- # kill 65429 00:11:36.302 04:26:37 -- common/autotest_common.sh@960 -- # wait 65429 00:11:36.302 04:26:37 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:11:36.302 04:26:37 -- target/tls.sh@227 -- # cleanup 00:11:36.302 04:26:37 -- target/tls.sh@15 -- # process_shm --id 0 00:11:36.302 04:26:37 -- common/autotest_common.sh@806 -- # type=--id 00:11:36.302 04:26:37 -- common/autotest_common.sh@807 -- # id=0 00:11:36.302 04:26:37 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:36.302 04:26:37 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:36.302 04:26:37 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:36.302 04:26:37 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:36.302 04:26:37 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:36.302 04:26:37 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:36.302 nvmf_trace.0 00:11:36.302 04:26:37 -- common/autotest_common.sh@821 -- # return 0 00:11:36.302 04:26:37 -- target/tls.sh@16 -- # killprocess 65461 00:11:36.302 04:26:37 -- common/autotest_common.sh@936 -- # '[' -z 65461 ']' 00:11:36.302 04:26:37 -- common/autotest_common.sh@940 -- # kill -0 65461 00:11:36.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65461) - No such process 00:11:36.302 Process with pid 65461 is not found 00:11:36.302 04:26:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65461 is not found' 00:11:36.302 04:26:37 -- target/tls.sh@17 -- # nvmftestfini 00:11:36.302 04:26:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:36.302 04:26:37 -- nvmf/common.sh@116 -- # sync 00:11:36.302 04:26:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:36.302 04:26:37 -- nvmf/common.sh@119 -- # set +e 00:11:36.302 04:26:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:36.302 04:26:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:36.302 rmmod nvme_tcp 00:11:36.302 rmmod nvme_fabrics 00:11:36.302 rmmod nvme_keyring 00:11:36.302 04:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:36.302 04:26:37 -- nvmf/common.sh@123 -- # set -e 00:11:36.302 04:26:37 -- nvmf/common.sh@124 -- # return 0 00:11:36.302 04:26:37 -- nvmf/common.sh@477 -- # '[' -n 65429 ']' 00:11:36.302 04:26:37 -- nvmf/common.sh@478 -- # killprocess 65429 00:11:36.302 04:26:37 -- common/autotest_common.sh@936 -- # '[' -z 65429 ']' 00:11:36.302 Process with pid 65429 is not found 00:11:36.302 04:26:37 -- 
common/autotest_common.sh@940 -- # kill -0 65429 00:11:36.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65429) - No such process 00:11:36.302 04:26:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65429 is not found' 00:11:36.302 04:26:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:36.302 04:26:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:36.302 04:26:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:36.302 04:26:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.302 04:26:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:36.302 04:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.302 04:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.302 04:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.302 04:26:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:36.302 04:26:37 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:36.302 ************************************ 00:11:36.302 END TEST nvmf_tls 00:11:36.302 ************************************ 00:11:36.302 00:11:36.302 real 1m10.136s 00:11:36.302 user 1m49.284s 00:11:36.302 sys 0m23.397s 00:11:36.302 04:26:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:36.302 04:26:37 -- common/autotest_common.sh@10 -- # set +x 00:11:36.302 04:26:38 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:36.302 04:26:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:36.302 04:26:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.302 04:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:36.302 ************************************ 00:11:36.302 START TEST nvmf_fips 00:11:36.302 ************************************ 00:11:36.302 04:26:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:36.302 * Looking for test storage... 
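The killprocess calls that closed out the TLS test above follow a recurring pattern in test/common/autotest_common.sh: probe the pid with kill -0 (which reports "No such process" once the target is already gone, as seen for pids 65461 and 65429), confirm the command name with ps, then kill and wait. A rough sketch of that pattern, reconstructed from the traced statements rather than copied from the helper itself:

    killprocess() {
        local pid=$1
        # kill -0 only checks for existence; it fails once the process has exited
        if ! kill -0 "$pid" 2> /dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        # ps --no-headers -o comm= <pid> prints just the command name (reactor_1, reactor_2, ...)
        # the real helper also special-cases processes that were started via sudo
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true
    }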
00:11:36.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:36.302 04:26:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:36.302 04:26:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:36.302 04:26:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:36.302 04:26:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:36.302 04:26:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:36.302 04:26:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:36.302 04:26:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:36.302 04:26:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:36.302 04:26:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:36.302 04:26:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.302 04:26:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:36.302 04:26:38 -- scripts/common.sh@337 -- # local 'op=<' 00:11:36.302 04:26:38 -- scripts/common.sh@339 -- # ver1_l=2 00:11:36.302 04:26:38 -- scripts/common.sh@340 -- # ver2_l=1 00:11:36.302 04:26:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:36.302 04:26:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:36.302 04:26:38 -- scripts/common.sh@344 -- # : 1 00:11:36.302 04:26:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:36.302 04:26:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.302 04:26:38 -- scripts/common.sh@364 -- # decimal 1 00:11:36.302 04:26:38 -- scripts/common.sh@352 -- # local d=1 00:11:36.302 04:26:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.302 04:26:38 -- scripts/common.sh@354 -- # echo 1 00:11:36.302 04:26:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:36.302 04:26:38 -- scripts/common.sh@365 -- # decimal 2 00:11:36.302 04:26:38 -- scripts/common.sh@352 -- # local d=2 00:11:36.302 04:26:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.302 04:26:38 -- scripts/common.sh@354 -- # echo 2 00:11:36.302 04:26:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:36.302 04:26:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:36.302 04:26:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:36.302 04:26:38 -- scripts/common.sh@367 -- # return 0 00:11:36.302 04:26:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.302 04:26:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.302 --rc genhtml_branch_coverage=1 00:11:36.302 --rc genhtml_function_coverage=1 00:11:36.302 --rc genhtml_legend=1 00:11:36.302 --rc geninfo_all_blocks=1 00:11:36.302 --rc geninfo_unexecuted_blocks=1 00:11:36.302 00:11:36.302 ' 00:11:36.302 04:26:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.302 --rc genhtml_branch_coverage=1 00:11:36.302 --rc genhtml_function_coverage=1 00:11:36.302 --rc genhtml_legend=1 00:11:36.302 --rc geninfo_all_blocks=1 00:11:36.302 --rc geninfo_unexecuted_blocks=1 00:11:36.302 00:11:36.302 ' 00:11:36.302 04:26:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.302 --rc genhtml_branch_coverage=1 00:11:36.302 --rc genhtml_function_coverage=1 00:11:36.302 --rc genhtml_legend=1 00:11:36.302 --rc geninfo_all_blocks=1 00:11:36.302 --rc geninfo_unexecuted_blocks=1 00:11:36.302 00:11:36.302 ' 00:11:36.302 
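The cmp_versions trace just above (the lcov "lt 1.15 2" check, repeated further down as "ge 3.1.1 3.0.0" for openssl) splits both version strings on dots and dashes and compares them component by component, treating missing components as zero. A rough sketch of that comparison, reconstructed from the traced statements; the real helper in scripts/common.sh handles more operators than shown here:

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components compare as 0
            if ((d1 > d2)); then [[ $op == ">" || $op == ">=" ]]; return; fi
            if ((d1 < d2)); then [[ $op == "<" || $op == "<=" ]]; return; fi
        done
        [[ $op == "==" || $op == ">=" || $op == "<=" ]]   # all components equal
    }
    lt() { cmp_versions "$1" "<"  "$2"; }    # the lcov check above
    ge() { cmp_versions "$1" ">=" "$2"; }    # the openssl >= 3.0.0 check below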
04:26:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:36.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.302 --rc genhtml_branch_coverage=1 00:11:36.302 --rc genhtml_function_coverage=1 00:11:36.302 --rc genhtml_legend=1 00:11:36.302 --rc geninfo_all_blocks=1 00:11:36.302 --rc geninfo_unexecuted_blocks=1 00:11:36.302 00:11:36.302 ' 00:11:36.302 04:26:38 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:36.302 04:26:38 -- nvmf/common.sh@7 -- # uname -s 00:11:36.302 04:26:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.302 04:26:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.302 04:26:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.302 04:26:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.302 04:26:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.302 04:26:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.302 04:26:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.302 04:26:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.302 04:26:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.302 04:26:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.302 04:26:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:11:36.302 04:26:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:11:36.302 04:26:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.302 04:26:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.302 04:26:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:36.302 04:26:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:36.302 04:26:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.302 04:26:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.302 04:26:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.302 04:26:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 04:26:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 04:26:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 04:26:38 -- paths/export.sh@5 -- # export PATH 00:11:36.303 04:26:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.303 04:26:38 -- nvmf/common.sh@46 -- # : 0 00:11:36.303 04:26:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:36.303 04:26:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:36.303 04:26:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:36.303 04:26:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.303 04:26:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.303 04:26:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:36.303 04:26:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:36.303 04:26:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:36.303 04:26:38 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.303 04:26:38 -- fips/fips.sh@89 -- # check_openssl_version 00:11:36.303 04:26:38 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:36.303 04:26:38 -- fips/fips.sh@85 -- # openssl version 00:11:36.303 04:26:38 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:36.303 04:26:38 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:11:36.303 04:26:38 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:11:36.303 04:26:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:36.303 04:26:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:36.303 04:26:38 -- scripts/common.sh@335 -- # IFS=.-: 00:11:36.303 04:26:38 -- scripts/common.sh@335 -- # read -ra ver1 00:11:36.303 04:26:38 -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.303 04:26:38 -- scripts/common.sh@336 -- # read -ra ver2 00:11:36.303 04:26:38 -- scripts/common.sh@337 -- # local 'op=>=' 00:11:36.303 04:26:38 -- scripts/common.sh@339 -- # ver1_l=3 00:11:36.303 04:26:38 -- scripts/common.sh@340 -- # ver2_l=3 00:11:36.303 04:26:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:36.303 04:26:38 -- scripts/common.sh@343 -- # case "$op" in 00:11:36.303 04:26:38 -- scripts/common.sh@347 -- # : 1 00:11:36.303 04:26:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:36.303 04:26:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.303 04:26:38 -- scripts/common.sh@364 -- # decimal 3 00:11:36.303 04:26:38 -- scripts/common.sh@352 -- # local d=3 00:11:36.303 04:26:38 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:36.303 04:26:38 -- scripts/common.sh@354 -- # echo 3 00:11:36.303 04:26:38 -- scripts/common.sh@364 -- # ver1[v]=3 00:11:36.303 04:26:38 -- scripts/common.sh@365 -- # decimal 3 00:11:36.303 04:26:38 -- scripts/common.sh@352 -- # local d=3 00:11:36.303 04:26:38 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:36.303 04:26:38 -- scripts/common.sh@354 -- # echo 3 00:11:36.303 04:26:38 -- scripts/common.sh@365 -- # ver2[v]=3 00:11:36.303 04:26:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:36.303 04:26:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:36.303 04:26:38 -- scripts/common.sh@363 -- # (( v++ )) 00:11:36.303 04:26:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.303 04:26:38 -- scripts/common.sh@364 -- # decimal 1 00:11:36.303 04:26:38 -- scripts/common.sh@352 -- # local d=1 00:11:36.303 04:26:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.303 04:26:38 -- scripts/common.sh@354 -- # echo 1 00:11:36.303 04:26:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:36.303 04:26:38 -- scripts/common.sh@365 -- # decimal 0 00:11:36.303 04:26:38 -- scripts/common.sh@352 -- # local d=0 00:11:36.303 04:26:38 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:36.303 04:26:38 -- scripts/common.sh@354 -- # echo 0 00:11:36.303 04:26:38 -- scripts/common.sh@365 -- # ver2[v]=0 00:11:36.303 04:26:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:36.303 04:26:38 -- scripts/common.sh@366 -- # return 0 00:11:36.303 04:26:38 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:36.303 04:26:38 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:11:36.303 04:26:38 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:36.303 04:26:38 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:36.303 04:26:38 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:36.303 04:26:38 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:36.303 04:26:38 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:36.303 04:26:38 -- fips/fips.sh@113 -- # build_openssl_config 00:11:36.303 04:26:38 -- fips/fips.sh@37 -- # cat 00:11:36.303 04:26:38 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:11:36.303 04:26:38 -- fips/fips.sh@58 -- # cat - 00:11:36.303 04:26:38 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:36.303 04:26:38 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:36.303 04:26:38 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:36.303 04:26:38 -- fips/fips.sh@116 -- # openssl list -providers 00:11:36.303 04:26:38 -- fips/fips.sh@116 -- # grep name 00:11:36.303 04:26:38 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:36.303 04:26:38 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:36.303 04:26:38 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:36.303 04:26:38 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:36.303 04:26:38 -- fips/fips.sh@127 -- # : 00:11:36.303 04:26:38 -- common/autotest_common.sh@650 -- # local es=0 00:11:36.303 04:26:38 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:36.303 04:26:38 -- common/autotest_common.sh@638 -- # local arg=openssl 00:11:36.303 04:26:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.303 04:26:38 -- common/autotest_common.sh@642 -- # type -t openssl 00:11:36.303 04:26:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.303 04:26:38 -- common/autotest_common.sh@644 -- # type -P openssl 00:11:36.303 04:26:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.303 04:26:38 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:11:36.303 04:26:38 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:11:36.303 04:26:38 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:11:36.303 Error setting digest 00:11:36.303 40024F336D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:11:36.303 40024F336D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:11:36.303 04:26:38 -- common/autotest_common.sh@653 -- # es=1 00:11:36.303 04:26:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.303 04:26:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.303 04:26:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.303 04:26:38 -- fips/fips.sh@130 -- # nvmftestinit 00:11:36.303 04:26:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:36.303 04:26:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.303 04:26:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:36.303 04:26:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:36.303 04:26:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:36.303 04:26:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.303 04:26:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.303 04:26:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.303 04:26:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:36.303 04:26:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:36.303 04:26:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:36.303 04:26:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:36.303 04:26:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:36.303 04:26:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:36.303 04:26:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.303 04:26:38 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.303 04:26:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:36.303 04:26:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:36.303 04:26:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:36.303 04:26:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:36.303 04:26:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:36.303 04:26:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.303 04:26:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:36.303 04:26:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:36.303 04:26:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:36.303 04:26:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:36.303 04:26:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:36.303 04:26:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:36.303 Cannot find device "nvmf_tgt_br" 00:11:36.303 04:26:38 -- nvmf/common.sh@154 -- # true 00:11:36.303 04:26:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:36.303 Cannot find device "nvmf_tgt_br2" 00:11:36.303 04:26:38 -- nvmf/common.sh@155 -- # true 00:11:36.303 04:26:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:36.303 04:26:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:36.303 Cannot find device "nvmf_tgt_br" 00:11:36.303 04:26:38 -- nvmf/common.sh@157 -- # true 00:11:36.303 04:26:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:36.303 Cannot find device "nvmf_tgt_br2" 00:11:36.303 04:26:38 -- nvmf/common.sh@158 -- # true 00:11:36.303 04:26:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:36.303 04:26:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:36.304 04:26:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:36.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.304 04:26:38 -- nvmf/common.sh@161 -- # true 00:11:36.304 04:26:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:36.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.304 04:26:38 -- nvmf/common.sh@162 -- # true 00:11:36.304 04:26:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:36.304 04:26:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:36.304 04:26:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:36.304 04:26:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:36.304 04:26:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:36.304 04:26:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:36.304 04:26:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:36.304 04:26:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:36.304 04:26:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:36.304 04:26:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:36.304 04:26:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:36.304 04:26:38 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:36.304 04:26:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:36.304 04:26:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:36.304 04:26:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:36.304 04:26:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:36.304 04:26:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:36.304 04:26:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:36.304 04:26:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:36.304 04:26:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:36.304 04:26:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:36.304 04:26:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:36.304 04:26:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:36.304 04:26:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:36.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:36.304 00:11:36.304 --- 10.0.0.2 ping statistics --- 00:11:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.304 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:36.304 04:26:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:36.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:36.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:11:36.304 00:11:36.304 --- 10.0.0.3 ping statistics --- 00:11:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.304 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:36.304 04:26:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:36.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:36.304 00:11:36.304 --- 10.0.0.1 ping statistics --- 00:11:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.304 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:36.304 04:26:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.304 04:26:38 -- nvmf/common.sh@421 -- # return 0 00:11:36.304 04:26:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:36.304 04:26:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.304 04:26:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:36.304 04:26:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:36.304 04:26:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.304 04:26:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:36.304 04:26:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:36.304 04:26:38 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:36.304 04:26:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:36.304 04:26:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.304 04:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:36.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
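The nvmf_veth_init block traced above builds the virtual topology the rest of the run depends on: a network namespace for the target, veth pairs whose root-namespace ends hang off a bridge, the 10.0.0.0/24 addresses, an iptables rule for port 4420, and the three pings that confirm reachability. Condensed from the traced commands (interface names and addresses exactly as shown above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # joins the root-namespace ends of all three pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator to target data IP, verified above together with 10.0.0.3 and 10.0.0.1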
00:11:36.304 04:26:38 -- nvmf/common.sh@469 -- # nvmfpid=65811 00:11:36.304 04:26:38 -- nvmf/common.sh@470 -- # waitforlisten 65811 00:11:36.304 04:26:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:36.304 04:26:38 -- common/autotest_common.sh@829 -- # '[' -z 65811 ']' 00:11:36.304 04:26:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.304 04:26:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.304 04:26:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.304 04:26:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.304 04:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:36.304 [2024-12-07 04:26:38.825799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:36.304 [2024-12-07 04:26:38.825890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.304 [2024-12-07 04:26:38.965297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.304 [2024-12-07 04:26:39.017769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:36.304 [2024-12-07 04:26:39.017925] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.304 [2024-12-07 04:26:39.017939] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.304 [2024-12-07 04:26:39.017948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
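nvmfappstart launches nvmf_tgt inside that namespace (the EAL and app notices just above) and then blocks in waitforlisten until the application's RPC socket answers. The launch command is the one traced above; the polling loop below is only an illustration of the waiting idea, not the actual helper from autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # illustrative wait; the real waitforlisten also keeps checking that the pid is still alive
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done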
00:11:36.304 [2024-12-07 04:26:39.017979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.870 04:26:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.870 04:26:39 -- common/autotest_common.sh@862 -- # return 0 00:11:36.870 04:26:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:36.870 04:26:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.870 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:11:36.870 04:26:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.870 04:26:39 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:36.870 04:26:39 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:36.870 04:26:39 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:36.870 04:26:39 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:36.870 04:26:39 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:36.870 04:26:39 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:36.870 04:26:39 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:36.870 04:26:39 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:37.129 [2024-12-07 04:26:40.135347] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.129 [2024-12-07 04:26:40.151309] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:37.129 [2024-12-07 04:26:40.151542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.129 malloc0 00:11:37.129 04:26:40 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:37.129 04:26:40 -- fips/fips.sh@147 -- # bdevperf_pid=65857 00:11:37.129 04:26:40 -- fips/fips.sh@148 -- # waitforlisten 65857 /var/tmp/bdevperf.sock 00:11:37.129 04:26:40 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:37.129 04:26:40 -- common/autotest_common.sh@829 -- # '[' -z 65857 ']' 00:11:37.129 04:26:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:37.129 04:26:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.129 04:26:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:37.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:37.129 04:26:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.129 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:11:37.129 [2024-12-07 04:26:40.268375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
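Before bdevperf starts above, fips.sh writes the interchange-format TLS PSK to key.txt with 0600 permissions and calls setup_nvmf_tgt_conf, whose rpc.py calls produce the transport, listener and malloc0 notices in the trace. Only the key handling is traced verbatim; the RPC sequence below is a sketch of a typical SPDK target setup, and the exact flags (malloc size, --secure-channel, --psk on the host entry) are assumptions rather than a copy of this test's script:

    key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"                     # PSK files must not be readable by other users

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o       # $NVMF_TRANSPORT_OPTS from the trace; yields the "TCP Transport Init" notice
    $rpc bdev_malloc_create -b malloc0 32 512  # backing bdev for the namespace; size and block size assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"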
00:11:37.129 [2024-12-07 04:26:40.268640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65857 ] 00:11:37.386 [2024-12-07 04:26:40.403815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.386 [2024-12-07 04:26:40.473841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.318 04:26:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.318 04:26:41 -- common/autotest_common.sh@862 -- # return 0 00:11:38.318 04:26:41 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:38.318 [2024-12-07 04:26:41.419726] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:38.318 TLSTESTn1 00:11:38.318 04:26:41 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:38.577 Running I/O for 10 seconds... 00:11:48.599 00:11:48.599 Latency(us) 00:11:48.599 [2024-12-07T04:26:51.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.599 [2024-12-07T04:26:51.839Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:48.599 Verification LBA range: start 0x0 length 0x2000 00:11:48.599 TLSTESTn1 : 10.02 6225.81 24.32 0.00 0.00 20523.84 4438.57 20852.36 00:11:48.599 [2024-12-07T04:26:51.839Z] =================================================================================================================== 00:11:48.599 [2024-12-07T04:26:51.839Z] Total : 6225.81 24.32 0.00 0.00 20523.84 4438.57 20852.36 00:11:48.599 0 00:11:48.599 04:26:51 -- fips/fips.sh@1 -- # cleanup 00:11:48.599 04:26:51 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:48.599 04:26:51 -- common/autotest_common.sh@806 -- # type=--id 00:11:48.599 04:26:51 -- common/autotest_common.sh@807 -- # id=0 00:11:48.599 04:26:51 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:48.599 04:26:51 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:48.599 04:26:51 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:48.599 04:26:51 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:48.599 04:26:51 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:48.599 04:26:51 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:48.599 nvmf_trace.0 00:11:48.599 04:26:51 -- common/autotest_common.sh@821 -- # return 0 00:11:48.599 04:26:51 -- fips/fips.sh@16 -- # killprocess 65857 00:11:48.599 04:26:51 -- common/autotest_common.sh@936 -- # '[' -z 65857 ']' 00:11:48.599 04:26:51 -- common/autotest_common.sh@940 -- # kill -0 65857 00:11:48.599 04:26:51 -- common/autotest_common.sh@941 -- # uname 00:11:48.599 04:26:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.599 04:26:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65857 00:11:48.599 killing process with pid 65857 00:11:48.599 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.599 00:11:48.599 Latency(us) 00:11:48.599 
[2024-12-07T04:26:51.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.599 [2024-12-07T04:26:51.839Z] =================================================================================================================== 00:11:48.599 [2024-12-07T04:26:51.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:48.599 04:26:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:48.599 04:26:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:48.599 04:26:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65857' 00:11:48.599 04:26:51 -- common/autotest_common.sh@955 -- # kill 65857 00:11:48.599 04:26:51 -- common/autotest_common.sh@960 -- # wait 65857 00:11:48.857 04:26:51 -- fips/fips.sh@17 -- # nvmftestfini 00:11:48.857 04:26:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:48.857 04:26:51 -- nvmf/common.sh@116 -- # sync 00:11:48.857 04:26:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:48.857 04:26:52 -- nvmf/common.sh@119 -- # set +e 00:11:48.857 04:26:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:48.857 04:26:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:48.857 rmmod nvme_tcp 00:11:48.857 rmmod nvme_fabrics 00:11:48.857 rmmod nvme_keyring 00:11:48.857 04:26:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:48.857 04:26:52 -- nvmf/common.sh@123 -- # set -e 00:11:48.857 04:26:52 -- nvmf/common.sh@124 -- # return 0 00:11:48.857 04:26:52 -- nvmf/common.sh@477 -- # '[' -n 65811 ']' 00:11:48.857 04:26:52 -- nvmf/common.sh@478 -- # killprocess 65811 00:11:48.857 04:26:52 -- common/autotest_common.sh@936 -- # '[' -z 65811 ']' 00:11:48.857 04:26:52 -- common/autotest_common.sh@940 -- # kill -0 65811 00:11:48.857 04:26:52 -- common/autotest_common.sh@941 -- # uname 00:11:48.857 04:26:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.857 04:26:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65811 00:11:49.115 killing process with pid 65811 00:11:49.115 04:26:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:49.115 04:26:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:49.115 04:26:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65811' 00:11:49.115 04:26:52 -- common/autotest_common.sh@955 -- # kill 65811 00:11:49.115 04:26:52 -- common/autotest_common.sh@960 -- # wait 65811 00:11:49.115 04:26:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:49.115 04:26:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:49.115 04:26:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:49.115 04:26:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.115 04:26:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:49.115 04:26:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.115 04:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.115 04:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.115 04:26:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:49.115 04:26:52 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:49.115 ************************************ 00:11:49.115 END TEST nvmf_fips 00:11:49.115 ************************************ 00:11:49.115 00:11:49.115 real 0m14.315s 00:11:49.115 user 0m19.527s 00:11:49.115 sys 0m5.685s 00:11:49.115 04:26:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 
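The teardown traced above mirrors the setup: unload the kernel initiator modules (the rmmod lines), stop the nvmf_tgt started for this test, drop the namespace plumbing and remove the PSK file. Condensed from the traced commands; the namespace removal happens inside _remove_spdk_ns, whose own output is redirected away in the trace, so that line is an assumption:

    modprobe -v -r nvme-tcp             # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"              # pid 65811, the target started for this test
    ip netns delete nvmf_tgt_ns_spdk    # assumed to be part of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt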
00:11:49.115 04:26:52 -- common/autotest_common.sh@10 -- # set +x 00:11:49.374 04:26:52 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:49.374 04:26:52 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:49.374 04:26:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:49.374 04:26:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.374 04:26:52 -- common/autotest_common.sh@10 -- # set +x 00:11:49.374 ************************************ 00:11:49.374 START TEST nvmf_fuzz 00:11:49.374 ************************************ 00:11:49.374 04:26:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:49.374 * Looking for test storage... 00:11:49.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.374 04:26:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:49.374 04:26:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:49.374 04:26:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:49.374 04:26:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:49.374 04:26:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:49.374 04:26:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:49.374 04:26:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:49.374 04:26:52 -- scripts/common.sh@335 -- # IFS=.-: 00:11:49.374 04:26:52 -- scripts/common.sh@335 -- # read -ra ver1 00:11:49.374 04:26:52 -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.374 04:26:52 -- scripts/common.sh@336 -- # read -ra ver2 00:11:49.374 04:26:52 -- scripts/common.sh@337 -- # local 'op=<' 00:11:49.374 04:26:52 -- scripts/common.sh@339 -- # ver1_l=2 00:11:49.374 04:26:52 -- scripts/common.sh@340 -- # ver2_l=1 00:11:49.374 04:26:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:49.374 04:26:52 -- scripts/common.sh@343 -- # case "$op" in 00:11:49.374 04:26:52 -- scripts/common.sh@344 -- # : 1 00:11:49.374 04:26:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:49.374 04:26:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.374 04:26:52 -- scripts/common.sh@364 -- # decimal 1 00:11:49.374 04:26:52 -- scripts/common.sh@352 -- # local d=1 00:11:49.374 04:26:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.374 04:26:52 -- scripts/common.sh@354 -- # echo 1 00:11:49.374 04:26:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:49.374 04:26:52 -- scripts/common.sh@365 -- # decimal 2 00:11:49.374 04:26:52 -- scripts/common.sh@352 -- # local d=2 00:11:49.374 04:26:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.374 04:26:52 -- scripts/common.sh@354 -- # echo 2 00:11:49.374 04:26:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:49.374 04:26:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:49.374 04:26:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:49.374 04:26:52 -- scripts/common.sh@367 -- # return 0 00:11:49.374 04:26:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.374 04:26:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:49.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.374 --rc genhtml_branch_coverage=1 00:11:49.374 --rc genhtml_function_coverage=1 00:11:49.374 --rc genhtml_legend=1 00:11:49.374 --rc geninfo_all_blocks=1 00:11:49.374 --rc geninfo_unexecuted_blocks=1 00:11:49.374 00:11:49.374 ' 00:11:49.374 04:26:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:49.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.374 --rc genhtml_branch_coverage=1 00:11:49.374 --rc genhtml_function_coverage=1 00:11:49.374 --rc genhtml_legend=1 00:11:49.374 --rc geninfo_all_blocks=1 00:11:49.374 --rc geninfo_unexecuted_blocks=1 00:11:49.374 00:11:49.374 ' 00:11:49.374 04:26:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:49.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.374 --rc genhtml_branch_coverage=1 00:11:49.374 --rc genhtml_function_coverage=1 00:11:49.374 --rc genhtml_legend=1 00:11:49.374 --rc geninfo_all_blocks=1 00:11:49.374 --rc geninfo_unexecuted_blocks=1 00:11:49.374 00:11:49.374 ' 00:11:49.374 04:26:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:49.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.374 --rc genhtml_branch_coverage=1 00:11:49.374 --rc genhtml_function_coverage=1 00:11:49.374 --rc genhtml_legend=1 00:11:49.374 --rc geninfo_all_blocks=1 00:11:49.374 --rc geninfo_unexecuted_blocks=1 00:11:49.374 00:11:49.374 ' 00:11:49.374 04:26:52 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.374 04:26:52 -- nvmf/common.sh@7 -- # uname -s 00:11:49.374 04:26:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.374 04:26:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.374 04:26:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.374 04:26:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.374 04:26:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.374 04:26:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.374 04:26:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.374 04:26:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.374 04:26:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.374 04:26:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.374 04:26:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 
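The identity derived here is reused for the rest of the run: nvme gen-hostnqn emits a UUID-based NQN, the UUID portion becomes the host ID assigned on the next traced line, and both end up in the --hostnqn/--hostid arguments bundled into NVME_HOST. A small sketch with the values from the trace (the exact parameter expansion used by common.sh is an assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # 9be4eab6-f2ec-4821-ab95-f758750ade2b; derivation assumed
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")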
00:11:49.374 04:26:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:11:49.374 04:26:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.374 04:26:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.374 04:26:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.374 04:26:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.374 04:26:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.374 04:26:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.374 04:26:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.374 04:26:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.374 04:26:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.374 04:26:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.374 04:26:52 -- paths/export.sh@5 -- # export PATH 00:11:49.375 04:26:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.375 04:26:52 -- nvmf/common.sh@46 -- # : 0 00:11:49.375 04:26:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:49.375 04:26:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:49.375 04:26:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:49.375 04:26:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.375 04:26:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.375 04:26:52 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:49.375 04:26:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:49.375 04:26:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:49.375 04:26:52 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:49.375 04:26:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:49.375 04:26:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.375 04:26:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:49.375 04:26:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:49.375 04:26:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:49.375 04:26:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.375 04:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.375 04:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.375 04:26:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:49.375 04:26:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:49.375 04:26:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:49.375 04:26:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:49.375 04:26:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:49.375 04:26:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:49.375 04:26:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.375 04:26:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.375 04:26:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.375 04:26:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:49.375 04:26:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.375 04:26:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.375 04:26:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.375 04:26:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.375 04:26:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.375 04:26:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.375 04:26:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.375 04:26:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.375 04:26:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:49.375 04:26:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:49.375 Cannot find device "nvmf_tgt_br" 00:11:49.633 04:26:52 -- nvmf/common.sh@154 -- # true 00:11:49.633 04:26:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.633 Cannot find device "nvmf_tgt_br2" 00:11:49.633 04:26:52 -- nvmf/common.sh@155 -- # true 00:11:49.633 04:26:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:49.633 04:26:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:49.633 Cannot find device "nvmf_tgt_br" 00:11:49.633 04:26:52 -- nvmf/common.sh@157 -- # true 00:11:49.633 04:26:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:49.633 Cannot find device "nvmf_tgt_br2" 00:11:49.633 04:26:52 -- nvmf/common.sh@158 -- # true 00:11:49.633 04:26:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:49.633 04:26:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:49.633 04:26:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.633 04:26:52 -- nvmf/common.sh@161 -- # true 00:11:49.633 04:26:52 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.633 04:26:52 -- nvmf/common.sh@162 -- # true 00:11:49.633 04:26:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.633 04:26:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.633 04:26:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.633 04:26:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.633 04:26:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.633 04:26:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.633 04:26:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.633 04:26:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.633 04:26:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.633 04:26:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:49.634 04:26:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:49.634 04:26:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:49.634 04:26:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:49.634 04:26:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.634 04:26:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.634 04:26:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.634 04:26:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:49.634 04:26:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:49.634 04:26:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.634 04:26:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.892 04:26:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.892 04:26:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.892 04:26:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.892 04:26:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:49.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:49.892 00:11:49.892 --- 10.0.0.2 ping statistics --- 00:11:49.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.892 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:49.892 04:26:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:49.892 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.892 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:11:49.892 00:11:49.892 --- 10.0.0.3 ping statistics --- 00:11:49.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.893 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:49.893 04:26:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:49.893 00:11:49.893 --- 10.0.0.1 ping statistics --- 00:11:49.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.893 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:49.893 04:26:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.893 04:26:52 -- nvmf/common.sh@421 -- # return 0 00:11:49.893 04:26:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:49.893 04:26:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.893 04:26:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:49.893 04:26:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:49.893 04:26:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.893 04:26:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:49.893 04:26:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:49.893 04:26:52 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66187 00:11:49.893 04:26:52 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:49.893 04:26:52 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:49.893 04:26:52 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66187 00:11:49.893 04:26:52 -- common/autotest_common.sh@829 -- # '[' -z 66187 ']' 00:11:49.893 04:26:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.893 04:26:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.893 04:26:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
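For reference, the topology that nvmf_veth_init builds here is one extra network namespace for the target plus three veth pairs that are all bridged on the host side. A minimal stand-alone sketch of the same setup (names and addresses taken from the trace; run as root, stale-device cleanup omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three single-packet pings only prove that the bridge forwards in both directions before the target is launched; the earlier "Cannot find device" and "Cannot open network namespace" lines are the expected result of tearing down a topology that does not exist yet.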
00:11:49.893 04:26:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.893 04:26:52 -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 04:26:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.830 04:26:54 -- common/autotest_common.sh@862 -- # return 0 00:11:50.830 04:26:54 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.830 04:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.830 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 04:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.830 04:26:54 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:50.830 04:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.830 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 Malloc0 00:11:50.830 04:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.830 04:26:54 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:50.830 04:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.830 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 04:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.830 04:26:54 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.830 04:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.830 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 04:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.830 04:26:54 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.830 04:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.830 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.830 04:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.830 04:26:54 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:51.089 04:26:54 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:51.348 Shutting down the fuzz application 00:11:51.348 04:26:54 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:51.606 Shutting down the fuzz application 00:11:51.606 04:26:54 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.606 04:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.606 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:51.606 04:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.606 04:26:54 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:51.606 04:26:54 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:51.606 04:26:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:51.606 04:26:54 -- nvmf/common.sh@116 -- # sync 00:11:51.606 04:26:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:51.606 04:26:54 -- nvmf/common.sh@119 -- # set +e 00:11:51.606 04:26:54 -- 
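Condensed, the fuzz case above is: launch nvmf_tgt on one core inside the target namespace, stand up a single TCP subsystem backed by a 64 MiB malloc bdev, and run nvme_fuzz against it twice, once fully random for 30 seconds with a fixed seed and once replaying the bundled example.json. The test drives this through its rpc_cmd helper; a rough reproduction of the same sequence with scripts/rpc.py against the default /var/tmp/spdk.sock looks like this (paths relative to an SPDK checkout, a sketch rather than the exact test code):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a

The cleanup around this point is the mirror image: the subsystem is deleted, the nvme-tcp and nvme-fabrics modules are unloaded, target pid 66187 is killed, and 10.0.0.1 is flushed off nvmf_init_if before the fuzz logs are removed.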
nvmf/common.sh@120 -- # for i in {1..20} 00:11:51.606 04:26:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:51.606 rmmod nvme_tcp 00:11:51.606 rmmod nvme_fabrics 00:11:51.606 rmmod nvme_keyring 00:11:51.606 04:26:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:51.606 04:26:54 -- nvmf/common.sh@123 -- # set -e 00:11:51.606 04:26:54 -- nvmf/common.sh@124 -- # return 0 00:11:51.606 04:26:54 -- nvmf/common.sh@477 -- # '[' -n 66187 ']' 00:11:51.606 04:26:54 -- nvmf/common.sh@478 -- # killprocess 66187 00:11:51.606 04:26:54 -- common/autotest_common.sh@936 -- # '[' -z 66187 ']' 00:11:51.606 04:26:54 -- common/autotest_common.sh@940 -- # kill -0 66187 00:11:51.606 04:26:54 -- common/autotest_common.sh@941 -- # uname 00:11:51.866 04:26:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.866 04:26:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66187 00:11:51.866 04:26:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:51.866 04:26:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:51.866 killing process with pid 66187 00:11:51.866 04:26:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66187' 00:11:51.866 04:26:54 -- common/autotest_common.sh@955 -- # kill 66187 00:11:51.866 04:26:54 -- common/autotest_common.sh@960 -- # wait 66187 00:11:51.866 04:26:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:51.866 04:26:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:51.866 04:26:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:51.866 04:26:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.866 04:26:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:51.866 04:26:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.866 04:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.866 04:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.866 04:26:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:51.866 04:26:55 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:52.124 00:11:52.124 real 0m2.727s 00:11:52.124 user 0m2.927s 00:11:52.124 sys 0m0.567s 00:11:52.124 04:26:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.124 ************************************ 00:11:52.124 END TEST nvmf_fuzz 00:11:52.124 ************************************ 00:11:52.124 04:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:52.124 04:26:55 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:52.124 04:26:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.124 04:26:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.124 04:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:52.124 ************************************ 00:11:52.124 START TEST nvmf_multiconnection 00:11:52.124 ************************************ 00:11:52.124 04:26:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:52.124 * Looking for test storage... 
00:11:52.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.124 04:26:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:52.124 04:26:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:52.124 04:26:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:52.124 04:26:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:52.124 04:26:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:52.124 04:26:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:52.124 04:26:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:52.124 04:26:55 -- scripts/common.sh@335 -- # IFS=.-: 00:11:52.124 04:26:55 -- scripts/common.sh@335 -- # read -ra ver1 00:11:52.124 04:26:55 -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.124 04:26:55 -- scripts/common.sh@336 -- # read -ra ver2 00:11:52.124 04:26:55 -- scripts/common.sh@337 -- # local 'op=<' 00:11:52.124 04:26:55 -- scripts/common.sh@339 -- # ver1_l=2 00:11:52.124 04:26:55 -- scripts/common.sh@340 -- # ver2_l=1 00:11:52.124 04:26:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:52.124 04:26:55 -- scripts/common.sh@343 -- # case "$op" in 00:11:52.124 04:26:55 -- scripts/common.sh@344 -- # : 1 00:11:52.124 04:26:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:52.124 04:26:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.124 04:26:55 -- scripts/common.sh@364 -- # decimal 1 00:11:52.124 04:26:55 -- scripts/common.sh@352 -- # local d=1 00:11:52.124 04:26:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.124 04:26:55 -- scripts/common.sh@354 -- # echo 1 00:11:52.124 04:26:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:52.124 04:26:55 -- scripts/common.sh@365 -- # decimal 2 00:11:52.124 04:26:55 -- scripts/common.sh@352 -- # local d=2 00:11:52.124 04:26:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.124 04:26:55 -- scripts/common.sh@354 -- # echo 2 00:11:52.124 04:26:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:52.124 04:26:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:52.124 04:26:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:52.124 04:26:55 -- scripts/common.sh@367 -- # return 0 00:11:52.124 04:26:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.124 04:26:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.124 --rc genhtml_branch_coverage=1 00:11:52.124 --rc genhtml_function_coverage=1 00:11:52.124 --rc genhtml_legend=1 00:11:52.124 --rc geninfo_all_blocks=1 00:11:52.124 --rc geninfo_unexecuted_blocks=1 00:11:52.124 00:11:52.124 ' 00:11:52.124 04:26:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.124 --rc genhtml_branch_coverage=1 00:11:52.124 --rc genhtml_function_coverage=1 00:11:52.124 --rc genhtml_legend=1 00:11:52.124 --rc geninfo_all_blocks=1 00:11:52.124 --rc geninfo_unexecuted_blocks=1 00:11:52.124 00:11:52.124 ' 00:11:52.124 04:26:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.124 --rc genhtml_branch_coverage=1 00:11:52.124 --rc genhtml_function_coverage=1 00:11:52.124 --rc genhtml_legend=1 00:11:52.124 --rc geninfo_all_blocks=1 00:11:52.124 --rc geninfo_unexecuted_blocks=1 00:11:52.124 00:11:52.124 ' 00:11:52.124 
04:26:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.124 --rc genhtml_branch_coverage=1 00:11:52.124 --rc genhtml_function_coverage=1 00:11:52.124 --rc genhtml_legend=1 00:11:52.124 --rc geninfo_all_blocks=1 00:11:52.124 --rc geninfo_unexecuted_blocks=1 00:11:52.124 00:11:52.124 ' 00:11:52.124 04:26:55 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.124 04:26:55 -- nvmf/common.sh@7 -- # uname -s 00:11:52.124 04:26:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.124 04:26:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.124 04:26:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.124 04:26:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.124 04:26:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.124 04:26:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.124 04:26:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.124 04:26:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.124 04:26:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.124 04:26:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.124 04:26:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:11:52.124 04:26:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:11:52.124 04:26:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.124 04:26:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.124 04:26:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.124 04:26:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.124 04:26:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.124 04:26:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.124 04:26:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.124 04:26:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.124 04:26:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.124 04:26:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.124 04:26:55 -- paths/export.sh@5 -- # export PATH 00:11:52.124 04:26:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.124 04:26:55 -- nvmf/common.sh@46 -- # : 0 00:11:52.124 04:26:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:52.124 04:26:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:52.124 04:26:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:52.124 04:26:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.124 04:26:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.124 04:26:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:52.124 04:26:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:52.124 04:26:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:52.381 04:26:55 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.381 04:26:55 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.381 04:26:55 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:52.381 04:26:55 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:52.381 04:26:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:52.381 04:26:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.381 04:26:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:52.381 04:26:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:52.381 04:26:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:52.381 04:26:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.381 04:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.381 04:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.381 04:26:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:52.381 04:26:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:52.381 04:26:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:52.381 04:26:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:52.381 04:26:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:52.381 04:26:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:52.381 04:26:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.381 04:26:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.381 04:26:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:52.381 04:26:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:52.382 04:26:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.382 04:26:55 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.382 04:26:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.382 04:26:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.382 04:26:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.382 04:26:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.382 04:26:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.382 04:26:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.382 04:26:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:52.382 04:26:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:52.382 Cannot find device "nvmf_tgt_br" 00:11:52.382 04:26:55 -- nvmf/common.sh@154 -- # true 00:11:52.382 04:26:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.382 Cannot find device "nvmf_tgt_br2" 00:11:52.382 04:26:55 -- nvmf/common.sh@155 -- # true 00:11:52.382 04:26:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:52.382 04:26:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:52.382 Cannot find device "nvmf_tgt_br" 00:11:52.382 04:26:55 -- nvmf/common.sh@157 -- # true 00:11:52.382 04:26:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:52.382 Cannot find device "nvmf_tgt_br2" 00:11:52.382 04:26:55 -- nvmf/common.sh@158 -- # true 00:11:52.382 04:26:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:52.382 04:26:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:52.382 04:26:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.382 04:26:55 -- nvmf/common.sh@161 -- # true 00:11:52.382 04:26:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.382 04:26:55 -- nvmf/common.sh@162 -- # true 00:11:52.382 04:26:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.382 04:26:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.382 04:26:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.382 04:26:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.382 04:26:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.382 04:26:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.382 04:26:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.382 04:26:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.640 04:26:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.640 04:26:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:52.640 04:26:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:52.640 04:26:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:52.640 04:26:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:52.640 04:26:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.640 04:26:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:52.640 04:26:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.640 04:26:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:52.640 04:26:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:52.640 04:26:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:52.640 04:26:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:52.640 04:26:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:52.640 04:26:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:52.640 04:26:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:52.640 04:26:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:52.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:11:52.640 00:11:52.640 --- 10.0.0.2 ping statistics --- 00:11:52.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.640 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:52.640 04:26:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:52.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:52.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:11:52.640 00:11:52.640 --- 10.0.0.3 ping statistics --- 00:11:52.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.640 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:52.640 04:26:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:52.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:52.640 00:11:52.640 --- 10.0.0.1 ping statistics --- 00:11:52.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.640 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:52.640 04:26:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.640 04:26:55 -- nvmf/common.sh@421 -- # return 0 00:11:52.640 04:26:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:52.640 04:26:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.640 04:26:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:52.640 04:26:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:52.640 04:26:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.640 04:26:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:52.640 04:26:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:52.640 04:26:55 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:52.640 04:26:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:52.640 04:26:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.640 04:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:52.640 04:26:55 -- nvmf/common.sh@469 -- # nvmfpid=66388 00:11:52.640 04:26:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.640 04:26:55 -- nvmf/common.sh@470 -- # waitforlisten 66388 00:11:52.640 04:26:55 -- common/autotest_common.sh@829 -- # '[' -z 66388 ']' 00:11:52.640 04:26:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.640 04:26:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.640 04:26:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.640 04:26:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.640 04:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:52.640 [2024-12-07 04:26:55.804089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:52.640 [2024-12-07 04:26:55.804208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.898 [2024-12-07 04:26:55.944167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.898 [2024-12-07 04:26:55.995805] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:52.898 [2024-12-07 04:26:55.995950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.898 [2024-12-07 04:26:55.995963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.898 [2024-12-07 04:26:55.995971] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.898 [2024-12-07 04:26:55.996116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.898 [2024-12-07 04:26:55.996358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.898 [2024-12-07 04:26:55.996482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.898 [2024-12-07 04:26:55.996582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.829 04:26:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.829 04:26:56 -- common/autotest_common.sh@862 -- # return 0 00:11:53.829 04:26:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:53.829 04:26:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.829 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.829 04:26:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.829 04:26:56 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.829 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.829 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.829 [2024-12-07 04:26:56.870062] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.829 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.829 04:26:56 -- target/multiconnection.sh@21 -- # seq 1 11 00:11:53.829 04:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:53.829 04:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:53.829 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.829 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.829 Malloc1 00:11:53.829 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.829 04:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:11:53.829 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.829 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.829 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.829 04:26:56 -- 
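The multiconnection target is launched the same way as the fuzz target, just with four cores (-m 0xF) instead of one; because the veth setup succeeded, NVMF_APP was rebuilt as "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}", so the binary runs inside nvmf_tgt_ns_spdk and is reachable from the host only over the bridge. A simplified version of the launch-and-wait step (the real waitforlisten helper does more bookkeeping than this socket poll, which is only an illustration):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                   # 66388 in this run
  # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: cores 0-3
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192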
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.829 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.829 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.829 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.829 04:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.829 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.829 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 [2024-12-07 04:26:56.943090] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.830 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:53.830 04:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:11:53.830 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 Malloc2 00:11:53.830 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:56 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:53.830 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:56 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:11:53.830 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:56 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:53.830 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:56 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:53.830 04:26:56 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:11:53.830 04:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 Malloc3 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:11:53.830 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:11:53.830 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:53.830 
04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:53.830 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:11:53.830 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 Malloc4 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:11:53.830 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:11:53.830 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.830 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.830 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:53.830 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.830 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.087 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 Malloc5 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.087 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 Malloc6 00:11:54.087 04:26:57 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.087 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 Malloc7 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.087 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.087 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.087 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.087 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:11:54.087 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 Malloc8 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 
-- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.088 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 Malloc9 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.088 04:26:57 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 Malloc10 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.088 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.088 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.088 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:11:54.088 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.346 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.346 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:11:54.346 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.346 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.346 04:26:57 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.346 04:26:57 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:11:54.346 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.346 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 Malloc11 00:11:54.346 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.346 04:26:57 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:11:54.346 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.346 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.346 04:26:57 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:11:54.346 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.346 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.346 04:26:57 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:11:54.346 04:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.346 04:26:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 04:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.346 04:26:57 -- target/multiconnection.sh@28 -- # seq 1 11 00:11:54.346 04:26:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.346 04:26:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.346 04:26:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:11:54.346 04:26:57 -- common/autotest_common.sh@1187 -- # local i=0 00:11:54.346 04:26:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.346 04:26:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:54.346 04:26:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:56.878 04:26:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:56.878 04:26:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:56.878 04:26:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:11:56.878 04:26:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:56.878 04:26:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.878 04:26:59 -- common/autotest_common.sh@1197 -- # return 0 00:11:56.878 04:26:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.878 04:26:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:11:56.878 04:26:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:11:56.878 04:26:59 -- common/autotest_common.sh@1187 -- # local i=0 00:11:56.878 04:26:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.878 04:26:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:56.878 04:26:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:58.792 04:27:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:58.792 04:27:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
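Everything from SPDK1 through SPDK11 above is the same four RPCs repeated by the loop in multiconnection.sh: one 64 MiB malloc bdev, one subsystem, one namespace, and one TCP listener per index. In shell terms the traced loop is roughly:

  NVMF_SUBSYS=11
  for i in $(seq 1 $NVMF_SUBSYS); do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

All eleven subsystems share the single listener 10.0.0.2:4420 and are told apart only by subsystem NQN and serial number.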
NAME,SERIAL 00:11:58.792 04:27:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:11:58.792 04:27:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:58.792 04:27:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.792 04:27:01 -- common/autotest_common.sh@1197 -- # return 0 00:11:58.792 04:27:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:58.792 04:27:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:11:58.793 04:27:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:11:58.793 04:27:01 -- common/autotest_common.sh@1187 -- # local i=0 00:11:58.793 04:27:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.793 04:27:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:58.793 04:27:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:00.697 04:27:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:00.697 04:27:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:00.697 04:27:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:12:00.697 04:27:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:00.697 04:27:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.697 04:27:03 -- common/autotest_common.sh@1197 -- # return 0 00:12:00.697 04:27:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.697 04:27:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:00.955 04:27:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:00.955 04:27:04 -- common/autotest_common.sh@1187 -- # local i=0 00:12:00.955 04:27:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.955 04:27:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:00.955 04:27:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:02.859 04:27:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:02.859 04:27:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:02.859 04:27:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:12:02.859 04:27:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:02.859 04:27:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.859 04:27:06 -- common/autotest_common.sh@1197 -- # return 0 00:12:02.859 04:27:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:02.859 04:27:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:03.133 04:27:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:03.134 04:27:06 -- common/autotest_common.sh@1187 -- # local i=0 00:12:03.134 04:27:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.134 04:27:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:03.134 04:27:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:05.083 04:27:08 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:05.083 04:27:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:05.083 04:27:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:12:05.083 04:27:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:05.083 04:27:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.083 04:27:08 -- common/autotest_common.sh@1197 -- # return 0 00:12:05.083 04:27:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:05.083 04:27:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:05.343 04:27:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:05.343 04:27:08 -- common/autotest_common.sh@1187 -- # local i=0 00:12:05.343 04:27:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.343 04:27:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:05.343 04:27:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:07.249 04:27:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:07.250 04:27:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:07.250 04:27:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:07.250 04:27:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:07.250 04:27:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.250 04:27:10 -- common/autotest_common.sh@1197 -- # return 0 00:12:07.250 04:27:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:07.250 04:27:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:07.508 04:27:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:07.508 04:27:10 -- common/autotest_common.sh@1187 -- # local i=0 00:12:07.508 04:27:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.508 04:27:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:07.508 04:27:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:09.414 04:27:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:09.414 04:27:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:09.414 04:27:12 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:09.414 04:27:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:09.414 04:27:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.414 04:27:12 -- common/autotest_common.sh@1197 -- # return 0 00:12:09.414 04:27:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:09.414 04:27:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:09.674 04:27:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:09.674 04:27:12 -- common/autotest_common.sh@1187 -- # local i=0 00:12:09.674 04:27:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.674 04:27:12 -- 
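Each connect in this stretch uses the host NQN and host ID generated earlier by nvme gen-hostnqn, and success is judged purely by the new controller's serial number appearing in lsblk output: waitforserial sleeps two seconds, lists NAME,SERIAL, and retries up to 15 times until grep -c finds the expected SPDKn serial. Stripped down to its essentials (hostnqn and hostid values as generated in this run):

  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b
  NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b
  for i in $(seq 1 11); do
      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      # waitforserial SPDK$i, simplified: poll until the serial shows up in lsblk
      tries=0
      while (( tries++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") == 1 )) && break
      done
  done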
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:09.674 04:27:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:11.572 04:27:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:11.572 04:27:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:11.572 04:27:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:11.572 04:27:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:11.572 04:27:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.572 04:27:14 -- common/autotest_common.sh@1197 -- # return 0 00:12:11.572 04:27:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.572 04:27:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:11.829 04:27:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:11.829 04:27:14 -- common/autotest_common.sh@1187 -- # local i=0 00:12:11.829 04:27:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.829 04:27:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:11.829 04:27:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:13.738 04:27:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:13.738 04:27:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:13.738 04:27:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:13.738 04:27:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:13.738 04:27:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.738 04:27:16 -- common/autotest_common.sh@1197 -- # return 0 00:12:13.738 04:27:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:13.738 04:27:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:13.995 04:27:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:13.995 04:27:17 -- common/autotest_common.sh@1187 -- # local i=0 00:12:13.995 04:27:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.995 04:27:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:13.995 04:27:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:15.897 04:27:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:15.897 04:27:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:15.897 04:27:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:15.897 04:27:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:15.897 04:27:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.897 04:27:19 -- common/autotest_common.sh@1197 -- # return 0 00:12:15.897 04:27:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:15.897 04:27:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:16.155 04:27:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:16.155 04:27:19 -- common/autotest_common.sh@1187 -- # local i=0 
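The connect phase traced here repeats one pattern per subsystem: nvme connect to cnodeN over TCP, then poll lsblk until a block device reporting the serial SPDKN shows up. A minimal bash sketch of that loop, assuming the host NQN, host ID and target address printed earlier in this run (waitforserial and NVMF_SUBSYS appear in the trace; HOSTNQN, HOSTID and TARGET_IP are stand-ins for the values shown above):

# Sketch of the multiconnection connect/wait pattern seen in this trace.
waitforserial() {
    local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches, e.g. "SPDK3"
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a "$TARGET_IP" -s 4420
    waitforserial "SPDK$i"
done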
00:12:16.155 04:27:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.155 04:27:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:16.155 04:27:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:18.058 04:27:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:18.058 04:27:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:18.058 04:27:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:18.058 04:27:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:18.058 04:27:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.058 04:27:21 -- common/autotest_common.sh@1197 -- # return 0 00:12:18.058 04:27:21 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:18.058 [global] 00:12:18.058 thread=1 00:12:18.058 invalidate=1 00:12:18.058 rw=read 00:12:18.058 time_based=1 00:12:18.058 runtime=10 00:12:18.058 ioengine=libaio 00:12:18.058 direct=1 00:12:18.058 bs=262144 00:12:18.058 iodepth=64 00:12:18.058 norandommap=1 00:12:18.058 numjobs=1 00:12:18.058 00:12:18.058 [job0] 00:12:18.058 filename=/dev/nvme0n1 00:12:18.058 [job1] 00:12:18.058 filename=/dev/nvme10n1 00:12:18.058 [job2] 00:12:18.058 filename=/dev/nvme1n1 00:12:18.058 [job3] 00:12:18.058 filename=/dev/nvme2n1 00:12:18.058 [job4] 00:12:18.058 filename=/dev/nvme3n1 00:12:18.058 [job5] 00:12:18.058 filename=/dev/nvme4n1 00:12:18.058 [job6] 00:12:18.058 filename=/dev/nvme5n1 00:12:18.058 [job7] 00:12:18.058 filename=/dev/nvme6n1 00:12:18.058 [job8] 00:12:18.058 filename=/dev/nvme7n1 00:12:18.058 [job9] 00:12:18.058 filename=/dev/nvme8n1 00:12:18.058 [job10] 00:12:18.058 filename=/dev/nvme9n1 00:12:18.317 Could not set queue depth (nvme0n1) 00:12:18.317 Could not set queue depth (nvme10n1) 00:12:18.317 Could not set queue depth (nvme1n1) 00:12:18.317 Could not set queue depth (nvme2n1) 00:12:18.317 Could not set queue depth (nvme3n1) 00:12:18.317 Could not set queue depth (nvme4n1) 00:12:18.317 Could not set queue depth (nvme5n1) 00:12:18.317 Could not set queue depth (nvme6n1) 00:12:18.317 Could not set queue depth (nvme7n1) 00:12:18.317 Could not set queue depth (nvme8n1) 00:12:18.317 Could not set queue depth (nvme9n1) 00:12:18.317 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.317 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:12:18.317 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:18.318 fio-3.35 00:12:18.318 Starting 11 threads 00:12:30.585 00:12:30.585 job0: (groupid=0, jobs=1): err= 0: pid=66848: Sat Dec 7 04:27:31 2024 00:12:30.585 read: IOPS=908, BW=227MiB/s (238MB/s)(2293MiB/10095msec) 00:12:30.585 slat (usec): min=15, max=63673, avg=1073.44, stdev=2681.99 00:12:30.585 clat (msec): min=7, max=216, avg=69.27, stdev=33.43 00:12:30.585 lat (msec): min=7, max=216, avg=70.34, stdev=33.92 00:12:30.585 clat percentiles (msec): 00:12:30.585 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:12:30.585 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 64], 00:12:30.585 | 70.00th=[ 72], 80.00th=[ 113], 90.00th=[ 117], 95.00th=[ 122], 00:12:30.585 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 203], 99.95th=[ 211], 00:12:30.585 | 99.99th=[ 218] 00:12:30.585 bw ( KiB/s): min=136464, max=497664, per=12.46%, avg=233162.00, stdev=109657.31, samples=20 00:12:30.585 iops : min= 533, max= 1944, avg=910.65, stdev=428.43, samples=20 00:12:30.585 lat (msec) : 10=0.12%, 20=0.44%, 50=27.52%, 100=43.10%, 250=28.82% 00:12:30.585 cpu : usr=0.40%, sys=3.12%, ctx=2004, majf=0, minf=4097 00:12:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.585 issued rwts: total=9170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.585 job1: (groupid=0, jobs=1): err= 0: pid=66849: Sat Dec 7 04:27:31 2024 00:12:30.585 read: IOPS=703, BW=176MiB/s (185MB/s)(1764MiB/10023msec) 00:12:30.585 slat (usec): min=19, max=22436, avg=1412.61, stdev=3089.40 00:12:30.585 clat (msec): min=17, max=117, avg=89.39, stdev=12.35 00:12:30.585 lat (msec): min=18, max=120, avg=90.80, stdev=12.46 00:12:30.585 clat percentiles (msec): 00:12:30.585 | 1.00th=[ 49], 5.00th=[ 61], 10.00th=[ 74], 20.00th=[ 85], 00:12:30.585 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:12:30.585 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 104], 00:12:30.585 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 116], 99.95th=[ 118], 00:12:30.585 | 99.99th=[ 118] 00:12:30.585 bw ( KiB/s): min=166400, max=238557, per=9.57%, avg=179043.90, stdev=16877.66, samples=20 00:12:30.585 iops : min= 650, max= 931, avg=699.25, stdev=65.79, samples=20 00:12:30.585 lat (msec) : 20=0.04%, 50=1.33%, 100=86.13%, 250=12.50% 00:12:30.585 cpu : usr=0.31%, sys=2.79%, ctx=1555, majf=0, minf=4097 00:12:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.585 issued rwts: total=7056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.585 job2: (groupid=0, jobs=1): err= 0: pid=66850: Sat Dec 7 04:27:31 2024 00:12:30.585 read: IOPS=582, BW=146MiB/s (153MB/s)(1471MiB/10095msec) 00:12:30.585 slat (usec): min=17, max=118247, avg=1694.85, stdev=4020.10 00:12:30.585 clat (msec): min=67, max=209, avg=107.96, stdev=15.58 00:12:30.585 lat (msec): min=71, max=234, avg=109.66, stdev=15.80 00:12:30.585 clat percentiles (msec): 00:12:30.585 | 1.00th=[ 80], 5.00th=[ 85], 
10.00th=[ 88], 20.00th=[ 93], 00:12:30.585 | 30.00th=[ 99], 40.00th=[ 107], 50.00th=[ 112], 60.00th=[ 114], 00:12:30.585 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 127], 00:12:30.585 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 201], 99.95th=[ 203], 00:12:30.585 | 99.99th=[ 209] 00:12:30.585 bw ( KiB/s): min=104448, max=179712, per=7.96%, avg=149016.55, stdev=19410.95, samples=20 00:12:30.585 iops : min= 408, max= 702, avg=582.00, stdev=75.79, samples=20 00:12:30.585 lat (msec) : 100=32.73%, 250=67.27% 00:12:30.585 cpu : usr=0.24%, sys=2.18%, ctx=1348, majf=0, minf=4097 00:12:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.585 issued rwts: total=5884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.585 job3: (groupid=0, jobs=1): err= 0: pid=66851: Sat Dec 7 04:27:31 2024 00:12:30.585 read: IOPS=618, BW=155MiB/s (162MB/s)(1562MiB/10091msec) 00:12:30.585 slat (usec): min=16, max=29920, avg=1595.82, stdev=3431.99 00:12:30.585 clat (msec): min=31, max=211, avg=101.66, stdev=15.51 00:12:30.585 lat (msec): min=31, max=211, avg=103.25, stdev=15.72 00:12:30.585 clat percentiles (msec): 00:12:30.585 | 1.00th=[ 74], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:12:30.585 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 107], 00:12:30.585 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 125], 00:12:30.585 | 99.00th=[ 132], 99.50th=[ 155], 99.90th=[ 192], 99.95th=[ 205], 00:12:30.585 | 99.99th=[ 211] 00:12:30.585 bw ( KiB/s): min=131584, max=183808, per=8.46%, avg=158284.80, stdev=18941.65, samples=20 00:12:30.585 iops : min= 514, max= 718, avg=618.30, stdev=73.99, samples=20 00:12:30.585 lat (msec) : 50=0.38%, 100=52.64%, 250=46.97% 00:12:30.585 cpu : usr=0.40%, sys=2.50%, ctx=1440, majf=0, minf=4097 00:12:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.585 issued rwts: total=6246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.585 job4: (groupid=0, jobs=1): err= 0: pid=66852: Sat Dec 7 04:27:31 2024 00:12:30.585 read: IOPS=585, BW=146MiB/s (153MB/s)(1478MiB/10100msec) 00:12:30.585 slat (usec): min=20, max=85544, avg=1688.29, stdev=3954.03 00:12:30.585 clat (msec): min=15, max=210, avg=107.50, stdev=14.85 00:12:30.585 lat (msec): min=15, max=210, avg=109.19, stdev=15.10 00:12:30.585 clat percentiles (msec): 00:12:30.585 | 1.00th=[ 80], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 93], 00:12:30.585 | 30.00th=[ 99], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 114], 00:12:30.585 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 127], 00:12:30.585 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 205], 99.95th=[ 211], 00:12:30.585 | 99.99th=[ 211] 00:12:30.585 bw ( KiB/s): min=117248, max=176128, per=8.00%, avg=149733.85, stdev=17830.95, samples=20 00:12:30.585 iops : min= 458, max= 688, avg=584.80, stdev=69.70, samples=20 00:12:30.585 lat (msec) : 20=0.02%, 100=34.04%, 250=65.94% 00:12:30.585 cpu : usr=0.48%, sys=2.40%, ctx=1391, majf=0, minf=4097 00:12:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 
32=0.5%, >=64=98.9% 00:12:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.585 issued rwts: total=5913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.585 job5: (groupid=0, jobs=1): err= 0: pid=66853: Sat Dec 7 04:27:31 2024 00:12:30.585 read: IOPS=583, BW=146MiB/s (153MB/s)(1474MiB/10093msec) 00:12:30.585 slat (usec): min=19, max=47086, avg=1695.15, stdev=3786.35 00:12:30.585 clat (msec): min=63, max=198, avg=107.74, stdev=14.27 00:12:30.585 lat (msec): min=63, max=209, avg=109.43, stdev=14.50 00:12:30.585 clat percentiles (msec): 00:12:30.585 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 93], 00:12:30.585 | 30.00th=[ 100], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 114], 00:12:30.585 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 127], 00:12:30.585 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 199], 99.95th=[ 199], 00:12:30.585 | 99.99th=[ 199] 00:12:30.585 bw ( KiB/s): min=123126, max=174592, per=7.98%, avg=149293.60, stdev=17010.98, samples=20 00:12:30.585 iops : min= 480, max= 682, avg=583.05, stdev=66.50, samples=20 00:12:30.585 lat (msec) : 100=32.12%, 250=67.88% 00:12:30.586 cpu : usr=0.22%, sys=2.10%, ctx=1360, majf=0, minf=4097 00:12:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.586 issued rwts: total=5894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.586 job6: (groupid=0, jobs=1): err= 0: pid=66854: Sat Dec 7 04:27:31 2024 00:12:30.586 read: IOPS=617, BW=154MiB/s (162MB/s)(1559MiB/10097msec) 00:12:30.586 slat (usec): min=19, max=27197, avg=1598.62, stdev=3518.43 00:12:30.586 clat (msec): min=15, max=217, avg=101.86, stdev=15.76 00:12:30.586 lat (msec): min=15, max=217, avg=103.46, stdev=15.97 00:12:30.586 clat percentiles (msec): 00:12:30.586 | 1.00th=[ 73], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 89], 00:12:30.586 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 100], 60.00th=[ 107], 00:12:30.586 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 124], 00:12:30.586 | 99.00th=[ 133], 99.50th=[ 163], 99.90th=[ 203], 99.95th=[ 211], 00:12:30.586 | 99.99th=[ 218] 00:12:30.586 bw ( KiB/s): min=133899, max=179200, per=8.45%, avg=158027.25, stdev=18127.58, samples=20 00:12:30.586 iops : min= 523, max= 700, avg=617.20, stdev=70.89, samples=20 00:12:30.586 lat (msec) : 20=0.11%, 50=0.43%, 100=50.47%, 250=48.99% 00:12:30.586 cpu : usr=0.39%, sys=2.44%, ctx=1431, majf=0, minf=4097 00:12:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.586 issued rwts: total=6236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.586 job7: (groupid=0, jobs=1): err= 0: pid=66855: Sat Dec 7 04:27:31 2024 00:12:30.586 read: IOPS=674, BW=169MiB/s (177MB/s)(1691MiB/10025msec) 00:12:30.586 slat (usec): min=20, max=60319, avg=1457.68, stdev=3307.94 00:12:30.586 clat (msec): min=17, max=165, avg=93.27, stdev=10.71 00:12:30.586 lat (msec): min=18, 
max=165, avg=94.73, stdev=10.85 00:12:30.586 clat percentiles (msec): 00:12:30.586 | 1.00th=[ 58], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 87], 00:12:30.586 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:12:30.586 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 111], 00:12:30.586 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 146], 99.95th=[ 153], 00:12:30.586 | 99.99th=[ 165] 00:12:30.586 bw ( KiB/s): min=129536, max=181248, per=9.17%, avg=171569.80, stdev=10646.34, samples=20 00:12:30.586 iops : min= 506, max= 708, avg=670.10, stdev=41.55, samples=20 00:12:30.586 lat (msec) : 20=0.01%, 50=0.67%, 100=78.95%, 250=20.37% 00:12:30.586 cpu : usr=0.24%, sys=2.70%, ctx=1538, majf=0, minf=4097 00:12:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.586 issued rwts: total=6765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.586 job8: (groupid=0, jobs=1): err= 0: pid=66856: Sat Dec 7 04:27:31 2024 00:12:30.586 read: IOPS=702, BW=176MiB/s (184MB/s)(1760MiB/10021msec) 00:12:30.586 slat (usec): min=16, max=24712, avg=1415.73, stdev=3093.82 00:12:30.586 clat (msec): min=19, max=123, avg=89.55, stdev=12.26 00:12:30.586 lat (msec): min=20, max=123, avg=90.97, stdev=12.35 00:12:30.586 clat percentiles (msec): 00:12:30.586 | 1.00th=[ 46], 5.00th=[ 62], 10.00th=[ 77], 20.00th=[ 85], 00:12:30.586 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:12:30.586 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 105], 00:12:30.586 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 121], 00:12:30.586 | 99.99th=[ 124] 00:12:30.586 bw ( KiB/s): min=164681, max=236544, per=9.54%, avg=178583.55, stdev=16433.13, samples=20 00:12:30.586 iops : min= 643, max= 924, avg=697.50, stdev=64.20, samples=20 00:12:30.586 lat (msec) : 20=0.03%, 50=1.72%, 100=84.59%, 250=13.67% 00:12:30.586 cpu : usr=0.47%, sys=2.72%, ctx=1559, majf=0, minf=4097 00:12:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.586 issued rwts: total=7039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.586 job9: (groupid=0, jobs=1): err= 0: pid=66858: Sat Dec 7 04:27:31 2024 00:12:30.586 read: IOPS=732, BW=183MiB/s (192MB/s)(1849MiB/10097msec) 00:12:30.586 slat (usec): min=13, max=76149, avg=1346.49, stdev=3287.10 00:12:30.586 clat (msec): min=12, max=207, avg=85.89, stdev=30.83 00:12:30.586 lat (msec): min=15, max=208, avg=87.24, stdev=31.28 00:12:30.586 clat percentiles (msec): 00:12:30.586 | 1.00th=[ 34], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:12:30.586 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 110], 00:12:30.586 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 128], 00:12:30.586 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 192], 99.95th=[ 209], 00:12:30.586 | 99.99th=[ 209] 00:12:30.586 bw ( KiB/s): min=112640, max=286208, per=10.03%, avg=187724.55, stdev=67026.76, samples=20 00:12:30.586 iops : min= 440, max= 1118, avg=733.20, stdev=261.90, samples=20 00:12:30.586 lat (msec) : 20=0.65%, 50=1.73%, 100=54.33%, 250=43.29% 
00:12:30.586 cpu : usr=0.31%, sys=2.41%, ctx=1584, majf=0, minf=4097 00:12:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.586 issued rwts: total=7395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.586 job10: (groupid=0, jobs=1): err= 0: pid=66863: Sat Dec 7 04:27:31 2024 00:12:30.586 read: IOPS=616, BW=154MiB/s (162MB/s)(1556MiB/10093msec) 00:12:30.586 slat (usec): min=20, max=38150, avg=1604.36, stdev=3497.49 00:12:30.586 clat (msec): min=21, max=207, avg=102.04, stdev=14.81 00:12:30.586 lat (msec): min=22, max=218, avg=103.64, stdev=15.02 00:12:30.586 clat percentiles (msec): 00:12:30.586 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 90], 00:12:30.586 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 107], 00:12:30.586 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 123], 00:12:30.586 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 201], 99.95th=[ 201], 00:12:30.586 | 99.99th=[ 209] 00:12:30.586 bw ( KiB/s): min=132096, max=180224, per=8.43%, avg=157683.20, stdev=17700.29, samples=20 00:12:30.586 iops : min= 516, max= 704, avg=615.85, stdev=69.11, samples=20 00:12:30.586 lat (msec) : 50=0.58%, 100=49.63%, 250=49.79% 00:12:30.586 cpu : usr=0.29%, sys=2.13%, ctx=1432, majf=0, minf=4097 00:12:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:12:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:30.586 issued rwts: total=6222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:30.586 00:12:30.586 Run status group 0 (all jobs): 00:12:30.586 READ: bw=1827MiB/s (1916MB/s), 146MiB/s-227MiB/s (153MB/s-238MB/s), io=18.0GiB (19.4GB), run=10021-10100msec 00:12:30.586 00:12:30.586 Disk stats (read/write): 00:12:30.586 nvme0n1: ios=18223/0, merge=0/0, ticks=1232498/0, in_queue=1232498, util=97.73% 00:12:30.586 nvme10n1: ios=13991/0, merge=0/0, ticks=1234981/0, in_queue=1234981, util=97.93% 00:12:30.586 nvme1n1: ios=11647/0, merge=0/0, ticks=1232259/0, in_queue=1232259, util=98.13% 00:12:30.586 nvme2n1: ios=12365/0, merge=0/0, ticks=1228901/0, in_queue=1228901, util=98.09% 00:12:30.586 nvme3n1: ios=11700/0, merge=0/0, ticks=1230772/0, in_queue=1230772, util=98.35% 00:12:30.586 nvme4n1: ios=11661/0, merge=0/0, ticks=1230488/0, in_queue=1230488, util=98.36% 00:12:30.586 nvme5n1: ios=12354/0, merge=0/0, ticks=1229608/0, in_queue=1229608, util=98.61% 00:12:30.586 nvme6n1: ios=13417/0, merge=0/0, ticks=1235566/0, in_queue=1235566, util=98.76% 00:12:30.586 nvme7n1: ios=13973/0, merge=0/0, ticks=1233707/0, in_queue=1233707, util=98.95% 00:12:30.586 nvme8n1: ios=14672/0, merge=0/0, ticks=1231369/0, in_queue=1231369, util=99.06% 00:12:30.586 nvme9n1: ios=12318/0, merge=0/0, ticks=1230207/0, in_queue=1230207, util=99.06% 00:12:30.586 04:27:31 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:30.586 [global] 00:12:30.586 thread=1 00:12:30.586 invalidate=1 00:12:30.586 rw=randwrite 00:12:30.586 time_based=1 00:12:30.586 runtime=10 00:12:30.586 ioengine=libaio 00:12:30.586 direct=1 00:12:30.586 bs=262144 00:12:30.586 
iodepth=64 00:12:30.586 norandommap=1 00:12:30.586 numjobs=1 00:12:30.586 00:12:30.586 [job0] 00:12:30.586 filename=/dev/nvme0n1 00:12:30.586 [job1] 00:12:30.586 filename=/dev/nvme10n1 00:12:30.586 [job2] 00:12:30.586 filename=/dev/nvme1n1 00:12:30.586 [job3] 00:12:30.586 filename=/dev/nvme2n1 00:12:30.586 [job4] 00:12:30.586 filename=/dev/nvme3n1 00:12:30.586 [job5] 00:12:30.586 filename=/dev/nvme4n1 00:12:30.586 [job6] 00:12:30.586 filename=/dev/nvme5n1 00:12:30.586 [job7] 00:12:30.586 filename=/dev/nvme6n1 00:12:30.586 [job8] 00:12:30.586 filename=/dev/nvme7n1 00:12:30.586 [job9] 00:12:30.586 filename=/dev/nvme8n1 00:12:30.587 [job10] 00:12:30.587 filename=/dev/nvme9n1 00:12:30.587 Could not set queue depth (nvme0n1) 00:12:30.587 Could not set queue depth (nvme10n1) 00:12:30.587 Could not set queue depth (nvme1n1) 00:12:30.587 Could not set queue depth (nvme2n1) 00:12:30.587 Could not set queue depth (nvme3n1) 00:12:30.587 Could not set queue depth (nvme4n1) 00:12:30.587 Could not set queue depth (nvme5n1) 00:12:30.587 Could not set queue depth (nvme6n1) 00:12:30.587 Could not set queue depth (nvme7n1) 00:12:30.587 Could not set queue depth (nvme8n1) 00:12:30.587 Could not set queue depth (nvme9n1) 00:12:30.587 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:30.587 fio-3.35 00:12:30.587 Starting 11 threads 00:12:40.562 00:12:40.562 job0: (groupid=0, jobs=1): err= 0: pid=67059: Sat Dec 7 04:27:42 2024 00:12:40.562 write: IOPS=1131, BW=283MiB/s (297MB/s)(2843MiB/10050msec); 0 zone resets 00:12:40.562 slat (usec): min=17, max=6805, avg=875.49, stdev=1467.44 00:12:40.562 clat (usec): min=7532, max=99584, avg=55663.94, stdev=3383.31 00:12:40.562 lat (usec): min=7558, max=99622, avg=56539.43, stdev=3163.58 00:12:40.562 clat percentiles (usec): 00:12:40.562 | 1.00th=[50070], 5.00th=[52167], 10.00th=[52691], 20.00th=[53740], 00:12:40.562 | 30.00th=[54789], 40.00th=[55313], 50.00th=[56361], 60.00th=[56886], 00:12:40.562 | 70.00th=[56886], 80.00th=[57410], 90.00th=[57934], 95.00th=[58459], 00:12:40.562 | 99.00th=[58983], 99.50th=[60031], 99.90th=[89654], 99.95th=[95945], 00:12:40.562 | 99.99th=[99091] 00:12:40.562 bw ( KiB/s): min=283136, max=297984, per=20.01%, 
avg=289506.80, stdev=3554.41, samples=20 00:12:40.562 iops : min= 1106, max= 1164, avg=1130.85, stdev=13.86, samples=20 00:12:40.562 lat (msec) : 10=0.04%, 20=0.14%, 50=0.67%, 100=99.16% 00:12:40.562 cpu : usr=1.47%, sys=2.41%, ctx=12555, majf=0, minf=1 00:12:40.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:40.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.562 issued rwts: total=0,11373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.562 job1: (groupid=0, jobs=1): err= 0: pid=67063: Sat Dec 7 04:27:42 2024 00:12:40.562 write: IOPS=333, BW=83.5MiB/s (87.5MB/s)(848MiB/10163msec); 0 zone resets 00:12:40.562 slat (usec): min=18, max=50543, avg=2941.97, stdev=5233.55 00:12:40.562 clat (msec): min=21, max=342, avg=188.68, stdev=23.05 00:12:40.562 lat (msec): min=21, max=342, avg=191.62, stdev=22.81 00:12:40.562 clat percentiles (msec): 00:12:40.562 | 1.00th=[ 92], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 182], 00:12:40.562 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 197], 00:12:40.562 | 70.00th=[ 199], 80.00th=[ 199], 90.00th=[ 201], 95.00th=[ 207], 00:12:40.562 | 99.00th=[ 251], 99.50th=[ 296], 99.90th=[ 330], 99.95th=[ 342], 00:12:40.562 | 99.99th=[ 342] 00:12:40.562 bw ( KiB/s): min=79872, max=98304, per=5.89%, avg=85248.00, stdev=4855.84, samples=20 00:12:40.562 iops : min= 312, max= 384, avg=333.00, stdev=18.97, samples=20 00:12:40.562 lat (msec) : 50=0.47%, 100=0.59%, 250=97.94%, 500=1.00% 00:12:40.562 cpu : usr=0.71%, sys=0.93%, ctx=3144, majf=0, minf=1 00:12:40.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:12:40.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.562 issued rwts: total=0,3393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.562 job2: (groupid=0, jobs=1): err= 0: pid=67072: Sat Dec 7 04:27:42 2024 00:12:40.562 write: IOPS=681, BW=170MiB/s (179MB/s)(1716MiB/10077msec); 0 zone resets 00:12:40.562 slat (usec): min=13, max=74664, avg=1451.24, stdev=2616.79 00:12:40.562 clat (msec): min=72, max=176, avg=92.47, stdev=10.89 00:12:40.562 lat (msec): min=76, max=176, avg=93.93, stdev=10.76 00:12:40.562 clat percentiles (msec): 00:12:40.562 | 1.00th=[ 83], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:12:40.562 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:12:40.562 | 70.00th=[ 92], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 124], 00:12:40.562 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 167], 99.95th=[ 167], 00:12:40.562 | 99.99th=[ 176] 00:12:40.562 bw ( KiB/s): min=116736, max=184832, per=12.03%, avg=174087.60, stdev=18398.52, samples=20 00:12:40.562 iops : min= 456, max= 722, avg=680.00, stdev=71.86, samples=20 00:12:40.562 lat (msec) : 100=89.77%, 250=10.23% 00:12:40.562 cpu : usr=1.27%, sys=1.78%, ctx=4423, majf=0, minf=1 00:12:40.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:40.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,6864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:12:40.563 job3: (groupid=0, jobs=1): err= 0: pid=67073: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=498, BW=125MiB/s (131MB/s)(1259MiB/10104msec); 0 zone resets 00:12:40.563 slat (usec): min=18, max=65465, avg=1982.27, stdev=3490.63 00:12:40.563 clat (msec): min=69, max=216, avg=126.35, stdev= 7.59 00:12:40.563 lat (msec): min=69, max=216, avg=128.33, stdev= 6.88 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 122], 00:12:40.563 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:12:40.563 | 70.00th=[ 129], 80.00th=[ 130], 90.00th=[ 131], 95.00th=[ 132], 00:12:40.563 | 99.00th=[ 150], 99.50th=[ 171], 99.90th=[ 209], 99.95th=[ 209], 00:12:40.563 | 99.99th=[ 218] 00:12:40.563 bw ( KiB/s): min=112640, max=131072, per=8.80%, avg=127308.80, stdev=3949.46, samples=20 00:12:40.563 iops : min= 440, max= 512, avg=497.30, stdev=15.43, samples=20 00:12:40.563 lat (msec) : 100=0.42%, 250=99.58% 00:12:40.563 cpu : usr=0.69%, sys=1.07%, ctx=6900, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:12:40.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,5037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job4: (groupid=0, jobs=1): err= 0: pid=67074: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=357, BW=89.3MiB/s (93.6MB/s)(908MiB/10162msec); 0 zone resets 00:12:40.563 slat (usec): min=17, max=46624, avg=2722.88, stdev=4887.83 00:12:40.563 clat (msec): min=8, max=335, avg=176.37, stdev=34.47 00:12:40.563 lat (msec): min=8, max=335, avg=179.10, stdev=34.71 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 47], 5.00th=[ 117], 10.00th=[ 123], 20.00th=[ 157], 00:12:40.563 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:12:40.563 | 70.00th=[ 197], 80.00th=[ 199], 90.00th=[ 199], 95.00th=[ 201], 00:12:40.563 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 334], 00:12:40.563 | 99.99th=[ 334] 00:12:40.563 bw ( KiB/s): min=81920, max=133120, per=6.31%, avg=91289.60, stdev=16437.89, samples=20 00:12:40.563 iops : min= 320, max= 520, avg=356.60, stdev=64.21, samples=20 00:12:40.563 lat (msec) : 10=0.14%, 20=0.19%, 50=0.77%, 100=0.72%, 250=97.36% 00:12:40.563 lat (msec) : 500=0.83% 00:12:40.563 cpu : usr=0.45%, sys=0.93%, ctx=3624, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:12:40.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,3630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job5: (groupid=0, jobs=1): err= 0: pid=67075: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=333, BW=83.5MiB/s (87.6MB/s)(849MiB/10168msec); 0 zone resets 00:12:40.563 slat (usec): min=21, max=38636, avg=2941.01, stdev=5196.33 00:12:40.563 clat (msec): min=23, max=342, avg=188.60, stdev=24.07 00:12:40.563 lat (msec): min=23, max=342, avg=191.54, stdev=23.86 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 94], 5.00th=[ 148], 10.00th=[ 163], 20.00th=[ 182], 00:12:40.563 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 197], 
60.00th=[ 199], 00:12:40.563 | 70.00th=[ 199], 80.00th=[ 201], 90.00th=[ 203], 95.00th=[ 207], 00:12:40.563 | 99.00th=[ 251], 99.50th=[ 296], 99.90th=[ 330], 99.95th=[ 342], 00:12:40.563 | 99.99th=[ 342] 00:12:40.563 bw ( KiB/s): min=79872, max=104448, per=5.89%, avg=85299.20, stdev=6135.91, samples=20 00:12:40.563 iops : min= 312, max= 408, avg=333.20, stdev=23.97, samples=20 00:12:40.563 lat (msec) : 50=0.47%, 100=0.59%, 250=97.94%, 500=1.00% 00:12:40.563 cpu : usr=0.52%, sys=1.14%, ctx=3780, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:12:40.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,3396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job6: (groupid=0, jobs=1): err= 0: pid=67076: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=500, BW=125MiB/s (131MB/s)(1266MiB/10107msec); 0 zone resets 00:12:40.563 slat (usec): min=17, max=30171, avg=1970.18, stdev=3376.97 00:12:40.563 clat (msec): min=32, max=222, avg=125.77, stdev= 9.47 00:12:40.563 lat (msec): min=32, max=222, avg=127.74, stdev= 9.01 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 103], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:12:40.563 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:12:40.563 | 70.00th=[ 129], 80.00th=[ 130], 90.00th=[ 131], 95.00th=[ 132], 00:12:40.563 | 99.00th=[ 133], 99.50th=[ 178], 99.90th=[ 215], 99.95th=[ 215], 00:12:40.563 | 99.99th=[ 224] 00:12:40.563 bw ( KiB/s): min=124928, max=131584, per=8.84%, avg=127987.10, stdev=1747.16, samples=20 00:12:40.563 iops : min= 488, max= 514, avg=499.90, stdev= 6.85, samples=20 00:12:40.563 lat (msec) : 50=0.32%, 100=0.59%, 250=99.09% 00:12:40.563 cpu : usr=1.02%, sys=1.38%, ctx=5709, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:40.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,5062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job7: (groupid=0, jobs=1): err= 0: pid=67077: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=331, BW=82.8MiB/s (86.9MB/s)(842MiB/10164msec); 0 zone resets 00:12:40.563 slat (usec): min=18, max=52568, avg=2965.16, stdev=5291.55 00:12:40.563 clat (msec): min=37, max=339, avg=190.10, stdev=21.43 00:12:40.563 lat (msec): min=37, max=339, avg=193.06, stdev=21.10 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 111], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 184], 00:12:40.563 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 199], 00:12:40.563 | 70.00th=[ 199], 80.00th=[ 201], 90.00th=[ 205], 95.00th=[ 207], 00:12:40.563 | 99.00th=[ 249], 99.50th=[ 296], 99.90th=[ 330], 99.95th=[ 338], 00:12:40.563 | 99.99th=[ 338] 00:12:40.563 bw ( KiB/s): min=79872, max=102400, per=5.84%, avg=84582.40, stdev=5020.95, samples=20 00:12:40.563 iops : min= 312, max= 400, avg=330.40, stdev=19.61, samples=20 00:12:40.563 lat (msec) : 50=0.24%, 100=0.59%, 250=98.28%, 500=0.89% 00:12:40.563 cpu : usr=0.68%, sys=0.96%, ctx=2960, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:12:40.563 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,3368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job8: (groupid=0, jobs=1): err= 0: pid=67078: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=501, BW=125MiB/s (131MB/s)(1267MiB/10107msec); 0 zone resets 00:12:40.563 slat (usec): min=18, max=24877, avg=1967.32, stdev=3363.83 00:12:40.563 clat (msec): min=15, max=223, avg=125.62, stdev=11.37 00:12:40.563 lat (msec): min=15, max=223, avg=127.59, stdev=11.04 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 74], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:12:40.563 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:12:40.563 | 70.00th=[ 129], 80.00th=[ 130], 90.00th=[ 131], 95.00th=[ 132], 00:12:40.563 | 99.00th=[ 146], 99.50th=[ 178], 99.90th=[ 215], 99.95th=[ 215], 00:12:40.563 | 99.99th=[ 224] 00:12:40.563 bw ( KiB/s): min=124928, max=131584, per=8.85%, avg=128140.90, stdev=1871.11, samples=20 00:12:40.563 iops : min= 488, max= 514, avg=500.55, stdev= 7.31, samples=20 00:12:40.563 lat (msec) : 20=0.16%, 50=0.47%, 100=0.55%, 250=98.82% 00:12:40.563 cpu : usr=1.03%, sys=1.41%, ctx=6278, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:40.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,5068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job9: (groupid=0, jobs=1): err= 0: pid=67079: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=334, BW=83.6MiB/s (87.7MB/s)(850MiB/10162msec); 0 zone resets 00:12:40.563 slat (usec): min=17, max=32065, avg=2877.39, stdev=5150.81 00:12:40.563 clat (msec): min=17, max=341, avg=188.28, stdev=24.10 00:12:40.563 lat (msec): min=17, max=341, avg=191.15, stdev=23.98 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 88], 5.00th=[ 148], 10.00th=[ 165], 20.00th=[ 182], 00:12:40.563 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 199], 00:12:40.563 | 70.00th=[ 199], 80.00th=[ 199], 90.00th=[ 203], 95.00th=[ 205], 00:12:40.563 | 99.00th=[ 239], 99.50th=[ 296], 99.90th=[ 330], 99.95th=[ 342], 00:12:40.563 | 99.99th=[ 342] 00:12:40.563 bw ( KiB/s): min=81920, max=102400, per=5.90%, avg=85436.60, stdev=5420.12, samples=20 00:12:40.563 iops : min= 320, max= 400, avg=333.70, stdev=21.11, samples=20 00:12:40.563 lat (msec) : 20=0.12%, 50=0.35%, 100=0.71%, 250=97.91%, 500=0.91% 00:12:40.563 cpu : usr=0.59%, sys=1.10%, ctx=3692, majf=0, minf=1 00:12:40.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:12:40.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.563 issued rwts: total=0,3400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.563 job10: (groupid=0, jobs=1): err= 0: pid=67080: Sat Dec 7 04:27:42 2024 00:12:40.563 write: IOPS=683, BW=171MiB/s (179MB/s)(1722MiB/10077msec); 0 zone resets 00:12:40.563 slat (usec): min=17, max=31716, avg=1446.48, stdev=2482.24 00:12:40.563 clat (msec): min=33, max=161, avg=92.18, stdev=10.76 
00:12:40.563 lat (msec): min=33, max=161, avg=93.63, stdev=10.65 00:12:40.563 clat percentiles (msec): 00:12:40.563 | 1.00th=[ 82], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:12:40.563 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:12:40.563 | 70.00th=[ 92], 80.00th=[ 92], 90.00th=[ 102], 95.00th=[ 123], 00:12:40.564 | 99.00th=[ 127], 99.50th=[ 131], 99.90th=[ 148], 99.95th=[ 155], 00:12:40.564 | 99.99th=[ 161] 00:12:40.564 bw ( KiB/s): min=128512, max=184832, per=12.07%, avg=174650.80, stdev=16576.45, samples=20 00:12:40.564 iops : min= 502, max= 722, avg=682.20, stdev=64.74, samples=20 00:12:40.564 lat (msec) : 50=0.16%, 100=89.79%, 250=10.05% 00:12:40.564 cpu : usr=1.15%, sys=2.02%, ctx=12490, majf=0, minf=1 00:12:40.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:40.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:40.564 issued rwts: total=0,6886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.564 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.564 00:12:40.564 Run status group 0 (all jobs): 00:12:40.564 WRITE: bw=1413MiB/s (1482MB/s), 82.8MiB/s-283MiB/s (86.9MB/s-297MB/s), io=14.0GiB (15.1GB), run=10050-10168msec 00:12:40.564 00:12:40.564 Disk stats (read/write): 00:12:40.564 nvme0n1: ios=49/22575, merge=0/0, ticks=45/1215919, in_queue=1215964, util=97.94% 00:12:40.564 nvme10n1: ios=49/6643, merge=0/0, ticks=47/1208695, in_queue=1208742, util=98.04% 00:12:40.564 nvme1n1: ios=28/13528, merge=0/0, ticks=78/1210447, in_queue=1210525, util=97.87% 00:12:40.564 nvme2n1: ios=0/9900, merge=0/0, ticks=0/1211412, in_queue=1211412, util=97.86% 00:12:40.564 nvme3n1: ios=0/7108, merge=0/0, ticks=0/1209003, in_queue=1209003, util=97.99% 00:12:40.564 nvme4n1: ios=0/6647, merge=0/0, ticks=0/1208882, in_queue=1208882, util=98.33% 00:12:40.564 nvme5n1: ios=0/9961, merge=0/0, ticks=0/1212307, in_queue=1212307, util=98.35% 00:12:40.564 nvme6n1: ios=0/6588, merge=0/0, ticks=0/1208495, in_queue=1208495, util=98.44% 00:12:40.564 nvme7n1: ios=0/9978, merge=0/0, ticks=0/1212006, in_queue=1212006, util=98.71% 00:12:40.564 nvme8n1: ios=0/6654, merge=0/0, ticks=0/1208569, in_queue=1208569, util=98.77% 00:12:40.564 nvme9n1: ios=0/13578, merge=0/0, ticks=0/1211865, in_queue=1211865, util=98.78% 00:12:40.564 04:27:42 -- target/multiconnection.sh@36 -- # sync 00:12:40.564 04:27:42 -- target/multiconnection.sh@37 -- # seq 1 11 00:12:40.564 04:27:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.564 04:27:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:12:40.564 04:27:42 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:12:40.564 04:27:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:12:40.564 04:27:42 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.564 04:27:42 -- common/autotest_common.sh@561 -- # xtrace_disable 
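The teardown beginning here mirrors the connect phase: each cnodeN is disconnected on the initiator, the harness waits until no block device reports the SPDKN serial any more, and the subsystem is then deleted on the SPDK target over RPC. A rough bash sketch of that per-subsystem step (the retry/sleep details and the rpc_cmd wrapper over scripts/rpc.py are assumptions; the trace shows only the wrapped calls):

# Sketch of the disconnect/cleanup pattern repeated for cnode1..cnode11.
waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # finished once the serial no longer appears in lsblk output
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    waitforserial_disconnect "SPDK$i"
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # RPC to the running nvmf_tgt
done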
00:12:40.564 04:27:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:12:40.564 04:27:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:12:40.564 04:27:42 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:12:40.564 04:27:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:12:40.564 04:27:42 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:40.564 04:27:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:42 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:42 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:42 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:12:40.564 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:40.564 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:40.564 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:43 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:12:40.564 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:12:40.564 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:12:40.564 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:12:40.564 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:12:40.564 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:43 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:12:40.564 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.564 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:12:40.564 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.564 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.564 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.564 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.564 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:12:40.564 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:12:40.564 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:12:40.564 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.564 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:12:40.564 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.565 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:12:40.565 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.565 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:12:40.565 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.565 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.565 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.565 04:27:43 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:12:40.565 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:12:40.565 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:12:40.565 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.565 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.565 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:12:40.565 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.565 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:12:40.565 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.565 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:12:40.565 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.565 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.565 04:27:43 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.565 04:27:43 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:12:40.565 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:12:40.565 04:27:43 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:12:40.565 04:27:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.565 04:27:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:12:40.565 04:27:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.565 04:27:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.565 04:27:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:12:40.565 04:27:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.565 04:27:43 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:12:40.565 04:27:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.565 04:27:43 -- common/autotest_common.sh@10 -- # set +x 00:12:40.565 04:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.565 04:27:43 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:12:40.565 04:27:43 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:40.565 04:27:43 -- target/multiconnection.sh@47 -- # nvmftestfini 00:12:40.565 04:27:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:40.565 04:27:43 -- nvmf/common.sh@116 -- # sync 00:12:40.565 04:27:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:40.565 04:27:43 -- nvmf/common.sh@119 -- # set +e 00:12:40.565 04:27:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:40.565 04:27:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:40.565 rmmod nvme_tcp 00:12:40.565 rmmod nvme_fabrics 00:12:40.823 rmmod nvme_keyring 00:12:40.823 04:27:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:40.823 04:27:43 -- nvmf/common.sh@123 -- # set -e 00:12:40.823 04:27:43 -- nvmf/common.sh@124 -- # return 0 00:12:40.823 04:27:43 -- nvmf/common.sh@477 -- # '[' -n 66388 ']' 00:12:40.823 04:27:43 -- nvmf/common.sh@478 -- # killprocess 66388 00:12:40.823 04:27:43 -- common/autotest_common.sh@936 -- # '[' -z 66388 ']' 00:12:40.823 04:27:43 -- common/autotest_common.sh@940 -- # kill -0 66388 00:12:40.823 04:27:43 -- common/autotest_common.sh@941 -- # uname 00:12:40.823 04:27:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.823 04:27:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66388 00:12:40.823 04:27:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:40.823 04:27:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:40.823 killing process with pid 66388 00:12:40.823 04:27:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66388' 00:12:40.823 04:27:43 -- common/autotest_common.sh@955 -- # kill 66388 00:12:40.824 04:27:43 -- common/autotest_common.sh@960 -- # wait 66388 00:12:41.082 04:27:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.082 04:27:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.082 04:27:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.082 04:27:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.082 04:27:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.082 04:27:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.082 04:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.082 04:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
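nvmftestfini, in progress at this point of the trace, is the shared cleanup for these target tests: sync, unload the kernel NVMe/TCP modules with set +e so a still-referenced module does not abort the run, kill the nvmf_tgt process started at the beginning of the suite, and flush the addresses from the nvmf_init_if test interface. A condensed sketch of those steps (this run's target PID is 66388; the nvmfpid variable name is assumed):

# Condensed view of the nvmftestfini cleanup traced around this point.
sync
set +e
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
set -e

nvmfpid=66388                  # recorded when nvmf_tgt was launched (variable name assumed)
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid"
fi

ip -4 addr flush nvmf_init_if  # drop the test addresses from the virtual interface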
00:12:41.082 04:27:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:41.082 00:12:41.082 real 0m49.059s 00:12:41.082 user 2m40.364s 00:12:41.082 sys 0m35.303s 00:12:41.082 04:27:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.082 04:27:44 -- common/autotest_common.sh@10 -- # set +x 00:12:41.082 ************************************ 00:12:41.082 END TEST nvmf_multiconnection 00:12:41.082 ************************************ 00:12:41.082 04:27:44 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:41.082 04:27:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:41.082 04:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.082 04:27:44 -- common/autotest_common.sh@10 -- # set +x 00:12:41.082 ************************************ 00:12:41.082 START TEST nvmf_initiator_timeout 00:12:41.082 ************************************ 00:12:41.082 04:27:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:41.341 * Looking for test storage... 00:12:41.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:41.341 04:27:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:41.341 04:27:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:41.341 04:27:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.341 04:27:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.341 04:27:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.341 04:27:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.341 04:27:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.341 04:27:44 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.341 04:27:44 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.341 04:27:44 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.341 04:27:44 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.341 04:27:44 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.341 04:27:44 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.341 04:27:44 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.341 04:27:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.341 04:27:44 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.341 04:27:44 -- scripts/common.sh@344 -- # : 1 00:12:41.341 04:27:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.341 04:27:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.341 04:27:44 -- scripts/common.sh@364 -- # decimal 1 00:12:41.341 04:27:44 -- scripts/common.sh@352 -- # local d=1 00:12:41.341 04:27:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.341 04:27:44 -- scripts/common.sh@354 -- # echo 1 00:12:41.341 04:27:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.341 04:27:44 -- scripts/common.sh@365 -- # decimal 2 00:12:41.341 04:27:44 -- scripts/common.sh@352 -- # local d=2 00:12:41.341 04:27:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.341 04:27:44 -- scripts/common.sh@354 -- # echo 2 00:12:41.341 04:27:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.341 04:27:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.341 04:27:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.341 04:27:44 -- scripts/common.sh@367 -- # return 0 00:12:41.341 04:27:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.341 04:27:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.341 --rc genhtml_branch_coverage=1 00:12:41.341 --rc genhtml_function_coverage=1 00:12:41.341 --rc genhtml_legend=1 00:12:41.341 --rc geninfo_all_blocks=1 00:12:41.341 --rc geninfo_unexecuted_blocks=1 00:12:41.341 00:12:41.341 ' 00:12:41.341 04:27:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.341 --rc genhtml_branch_coverage=1 00:12:41.341 --rc genhtml_function_coverage=1 00:12:41.341 --rc genhtml_legend=1 00:12:41.341 --rc geninfo_all_blocks=1 00:12:41.341 --rc geninfo_unexecuted_blocks=1 00:12:41.341 00:12:41.341 ' 00:12:41.341 04:27:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.341 --rc genhtml_branch_coverage=1 00:12:41.341 --rc genhtml_function_coverage=1 00:12:41.341 --rc genhtml_legend=1 00:12:41.341 --rc geninfo_all_blocks=1 00:12:41.341 --rc geninfo_unexecuted_blocks=1 00:12:41.341 00:12:41.341 ' 00:12:41.341 04:27:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.341 --rc genhtml_branch_coverage=1 00:12:41.341 --rc genhtml_function_coverage=1 00:12:41.341 --rc genhtml_legend=1 00:12:41.341 --rc geninfo_all_blocks=1 00:12:41.341 --rc geninfo_unexecuted_blocks=1 00:12:41.341 00:12:41.341 ' 00:12:41.341 04:27:44 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.341 04:27:44 -- nvmf/common.sh@7 -- # uname -s 00:12:41.341 04:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.341 04:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.341 04:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.341 04:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.341 04:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.341 04:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.341 04:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.341 04:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.341 04:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.341 04:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.341 04:27:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 
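The host NQN generated by nvme gen-hostnqn above, and the host ID derived from it, are what the initiator passes to nvme connect later in this run. A hedged sketch of that usage, with the subsystem NQN and address taken from the connect call that appears further down in the trace; deriving the host ID by stripping the uuid: prefix is an assumption about how common.sh does it:

NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # reuse the UUID portion as the host ID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"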
00:12:41.341 04:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:12:41.341 04:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.341 04:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.341 04:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.341 04:27:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.341 04:27:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.341 04:27:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.341 04:27:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.341 04:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.341 04:27:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.341 04:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.341 04:27:44 -- paths/export.sh@5 -- # export PATH 00:12:41.342 04:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.342 04:27:44 -- nvmf/common.sh@46 -- # : 0 00:12:41.342 04:27:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.342 04:27:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.342 04:27:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.342 04:27:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.342 04:27:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.342 04:27:44 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:41.342 04:27:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.342 04:27:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.342 04:27:44 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:41.342 04:27:44 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:41.342 04:27:44 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:12:41.342 04:27:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.342 04:27:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.342 04:27:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.342 04:27:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.342 04:27:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.342 04:27:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.342 04:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.342 04:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.342 04:27:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:41.342 04:27:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:41.342 04:27:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:41.342 04:27:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:41.342 04:27:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:41.342 04:27:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:41.342 04:27:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.342 04:27:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.342 04:27:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.342 04:27:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:41.342 04:27:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.342 04:27:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.342 04:27:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.342 04:27:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.342 04:27:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.342 04:27:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.342 04:27:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.342 04:27:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.342 04:27:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:41.342 04:27:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:41.342 Cannot find device "nvmf_tgt_br" 00:12:41.342 04:27:44 -- nvmf/common.sh@154 -- # true 00:12:41.342 04:27:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.342 Cannot find device "nvmf_tgt_br2" 00:12:41.342 04:27:44 -- nvmf/common.sh@155 -- # true 00:12:41.342 04:27:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:41.342 04:27:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:41.342 Cannot find device "nvmf_tgt_br" 00:12:41.342 04:27:44 -- nvmf/common.sh@157 -- # true 00:12:41.342 04:27:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:41.342 Cannot find device "nvmf_tgt_br2" 00:12:41.342 04:27:44 -- nvmf/common.sh@158 -- # true 00:12:41.342 04:27:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:41.601 04:27:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:41.601 04:27:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:12:41.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.601 04:27:44 -- nvmf/common.sh@161 -- # true 00:12:41.601 04:27:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.601 04:27:44 -- nvmf/common.sh@162 -- # true 00:12:41.601 04:27:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.601 04:27:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.601 04:27:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.601 04:27:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.601 04:27:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.601 04:27:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.601 04:27:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.601 04:27:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:41.601 04:27:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:41.601 04:27:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:41.601 04:27:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:41.601 04:27:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:41.601 04:27:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:41.601 04:27:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.601 04:27:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.601 04:27:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.601 04:27:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:41.601 04:27:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:41.601 04:27:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:41.601 04:27:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:41.601 04:27:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:41.601 04:27:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:41.601 04:27:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:41.601 04:27:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:41.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:41.601 00:12:41.601 --- 10.0.0.2 ping statistics --- 00:12:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.601 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:41.601 04:27:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:41.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:41.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:12:41.601 00:12:41.601 --- 10.0.0.3 ping statistics --- 00:12:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.601 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:41.601 04:27:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:41.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:41.601 00:12:41.601 --- 10.0.0.1 ping statistics --- 00:12:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.601 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:41.601 04:27:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.601 04:27:44 -- nvmf/common.sh@421 -- # return 0 00:12:41.601 04:27:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:41.601 04:27:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.601 04:27:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:41.601 04:27:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:41.601 04:27:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.601 04:27:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:41.601 04:27:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:41.601 04:27:44 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:12:41.601 04:27:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:41.601 04:27:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.601 04:27:44 -- common/autotest_common.sh@10 -- # set +x 00:12:41.601 04:27:44 -- nvmf/common.sh@469 -- # nvmfpid=67454 00:12:41.601 04:27:44 -- nvmf/common.sh@470 -- # waitforlisten 67454 00:12:41.601 04:27:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.601 04:27:44 -- common/autotest_common.sh@829 -- # '[' -z 67454 ']' 00:12:41.601 04:27:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.601 04:27:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.601 04:27:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.601 04:27:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.601 04:27:44 -- common/autotest_common.sh@10 -- # set +x 00:12:41.860 [2024-12-07 04:27:44.913077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:41.860 [2024-12-07 04:27:44.913209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.860 [2024-12-07 04:27:45.058694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.117 [2024-12-07 04:27:45.116294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.117 [2024-12-07 04:27:45.116435] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.117 [2024-12-07 04:27:45.116448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.117 [2024-12-07 04:27:45.116457] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
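The nvmf_veth_init commands traced above connect the initiator side (the default namespace) to the target namespace through a bridge. A condensed sketch of the same plumbing, showing only one of the two target interfaces and using the addresses from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the two veth pairs together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # sanity check: the target address is reachable from the initiator side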
00:12:42.117 [2024-12-07 04:27:45.116587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.117 [2024-12-07 04:27:45.116702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.117 [2024-12-07 04:27:45.116793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.117 [2024-12-07 04:27:45.116794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.681 04:27:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.681 04:27:45 -- common/autotest_common.sh@862 -- # return 0 00:12:42.681 04:27:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:42.681 04:27:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:42.681 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.681 04:27:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.681 04:27:45 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:42.681 04:27:45 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:42.681 04:27:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.681 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 Malloc0 00:12:42.938 04:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.938 04:27:45 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:12:42.938 04:27:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.938 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 Delay0 00:12:42.938 04:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.938 04:27:45 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:42.938 04:27:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.938 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 [2024-12-07 04:27:45.962602] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.938 04:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.938 04:27:45 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:42.938 04:27:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.938 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 04:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.938 04:27:45 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.938 04:27:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.938 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 04:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.938 04:27:45 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.938 04:27:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.938 04:27:45 -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 [2024-12-07 04:27:45.994838] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.938 04:27:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.938 04:27:45 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.938 04:27:46 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.938 04:27:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:42.938 04:27:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.938 04:27:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:42.938 04:27:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:45.461 04:27:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:45.461 04:27:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:45.461 04:27:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.461 04:27:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:45.461 04:27:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.461 04:27:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:45.461 04:27:48 -- target/initiator_timeout.sh@35 -- # fio_pid=67521 00:12:45.461 04:27:48 -- target/initiator_timeout.sh@37 -- # sleep 3 00:12:45.461 04:27:48 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:12:45.461 [global] 00:12:45.461 thread=1 00:12:45.461 invalidate=1 00:12:45.461 rw=write 00:12:45.461 time_based=1 00:12:45.461 runtime=60 00:12:45.461 ioengine=libaio 00:12:45.461 direct=1 00:12:45.461 bs=4096 00:12:45.461 iodepth=1 00:12:45.461 norandommap=0 00:12:45.461 numjobs=1 00:12:45.461 00:12:45.461 verify_dump=1 00:12:45.461 verify_backlog=512 00:12:45.461 verify_state_save=0 00:12:45.461 do_verify=1 00:12:45.461 verify=crc32c-intel 00:12:45.461 [job0] 00:12:45.461 filename=/dev/nvme0n1 00:12:45.461 Could not set queue depth (nvme0n1) 00:12:45.461 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:45.461 fio-3.35 00:12:45.461 Starting 1 thread 00:12:47.991 04:27:51 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:12:47.991 04:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.991 04:27:51 -- common/autotest_common.sh@10 -- # set +x 00:12:47.991 true 00:12:47.991 04:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.991 04:27:51 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:12:47.991 04:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.991 04:27:51 -- common/autotest_common.sh@10 -- # set +x 00:12:47.991 true 00:12:47.991 04:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.991 04:27:51 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:12:47.991 04:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.991 04:27:51 -- common/autotest_common.sh@10 -- # set +x 00:12:47.991 true 00:12:47.991 04:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.991 04:27:51 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:12:47.991 04:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.991 04:27:51 -- common/autotest_common.sh@10 -- # set +x 00:12:47.991 true 00:12:47.991 04:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.991 04:27:51 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:51.280 04:27:54 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:51.280 04:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.280 04:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 true 00:12:51.280 04:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.280 04:27:54 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:51.280 04:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.280 04:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 true 00:12:51.280 04:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.280 04:27:54 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:51.280 04:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.280 04:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 true 00:12:51.280 04:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.280 04:27:54 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:51.280 04:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.280 04:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:51.280 true 00:12:51.280 04:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.280 04:27:54 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:51.280 04:27:54 -- target/initiator_timeout.sh@54 -- # wait 67521 00:13:47.506 00:13:47.506 job0: (groupid=0, jobs=1): err= 0: pid=67552: Sat Dec 7 04:28:48 2024 00:13:47.506 read: IOPS=815, BW=3263KiB/s (3342kB/s)(191MiB/60000msec) 00:13:47.506 slat (usec): min=9, max=8979, avg=13.00, stdev=51.26 00:13:47.506 clat (usec): min=156, max=40409k, avg=1031.08, stdev=182642.74 00:13:47.506 lat (usec): min=166, max=40409k, avg=1044.08, stdev=182642.75 00:13:47.506 clat percentiles (usec): 00:13:47.506 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:13:47.506 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:13:47.506 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:13:47.506 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 388], 99.95th=[ 603], 00:13:47.506 | 99.99th=[ 1303] 00:13:47.506 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:13:47.506 slat (usec): min=12, max=618, avg=19.81, stdev= 7.87 00:13:47.506 clat (usec): min=14, max=827, avg=158.14, stdev=20.93 00:13:47.506 lat (usec): min=132, max=1031, avg=177.95, stdev=22.91 00:13:47.506 clat percentiles (usec): 00:13:47.506 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 141], 00:13:47.506 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:13:47.506 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 194], 00:13:47.506 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 262], 99.95th=[ 318], 00:13:47.506 | 99.99th=[ 562] 00:13:47.506 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=9872.41, stdev=1656.34, samples=39 00:13:47.506 iops : min= 1024, max= 3072, avg=2468.10, stdev=414.08, samples=39 00:13:47.506 lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=98.61%, 500=1.35% 00:13:47.506 lat (usec) : 750=0.02%, 1000=0.01% 00:13:47.506 lat (msec) : 2=0.01%, >=2000=0.01% 00:13:47.506 cpu : usr=0.57%, sys=2.04%, ctx=98114, majf=0, minf=5 00:13:47.506 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:47.507 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.507 issued rwts: total=48949,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:47.507 00:13:47.507 Run status group 0 (all jobs): 00:13:47.507 READ: bw=3263KiB/s (3342kB/s), 3263KiB/s-3263KiB/s (3342kB/s-3342kB/s), io=191MiB (200MB), run=60000-60000msec 00:13:47.507 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:13:47.507 00:13:47.507 Disk stats (read/write): 00:13:47.507 nvme0n1: ios=48897/48967, merge=0/0, ticks=10540/8443, in_queue=18983, util=99.92% 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.507 04:28:48 -- common/autotest_common.sh@1208 -- # local i=0 00:13:47.507 04:28:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:47.507 04:28:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.507 04:28:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:47.507 04:28:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.507 04:28:48 -- common/autotest_common.sh@1220 -- # return 0 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:13:47.507 nvmf hotplug test: fio successful as expected 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.507 04:28:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.507 04:28:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 04:28:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:13:47.507 04:28:48 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:13:47.507 04:28:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:47.507 04:28:48 -- nvmf/common.sh@116 -- # sync 00:13:47.507 04:28:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:47.507 04:28:48 -- nvmf/common.sh@119 -- # set +e 00:13:47.507 04:28:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:47.507 04:28:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:47.507 rmmod nvme_tcp 00:13:47.507 rmmod nvme_fabrics 00:13:47.507 rmmod nvme_keyring 00:13:47.507 04:28:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:47.507 04:28:48 -- nvmf/common.sh@123 -- # set -e 00:13:47.507 04:28:48 -- nvmf/common.sh@124 -- # return 0 00:13:47.507 04:28:48 -- nvmf/common.sh@477 -- # '[' -n 67454 ']' 00:13:47.507 04:28:48 -- nvmf/common.sh@478 -- # killprocess 67454 00:13:47.507 04:28:48 -- common/autotest_common.sh@936 -- # '[' -z 67454 ']' 00:13:47.507 04:28:48 -- common/autotest_common.sh@940 -- # kill -0 67454 00:13:47.507 04:28:48 -- common/autotest_common.sh@941 -- # uname 00:13:47.507 04:28:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:47.507 04:28:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67454 
00:13:47.507 killing process with pid 67454 00:13:47.507 04:28:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:47.507 04:28:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:47.507 04:28:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67454' 00:13:47.507 04:28:48 -- common/autotest_common.sh@955 -- # kill 67454 00:13:47.507 04:28:48 -- common/autotest_common.sh@960 -- # wait 67454 00:13:47.507 04:28:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.507 04:28:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.507 04:28:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.507 04:28:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.507 04:28:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.507 04:28:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.507 04:28:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.507 04:28:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.507 04:28:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:47.507 00:13:47.507 real 1m4.572s 00:13:47.507 user 3m53.565s 00:13:47.507 sys 0m21.560s 00:13:47.507 04:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:47.507 ************************************ 00:13:47.507 END TEST nvmf_initiator_timeout 00:13:47.507 ************************************ 00:13:47.507 04:28:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 04:28:48 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:13:47.507 04:28:48 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:47.507 04:28:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.507 04:28:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 04:28:48 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:47.507 04:28:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.507 04:28:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 04:28:48 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:47.507 04:28:48 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:47.507 04:28:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:47.507 04:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.507 04:28:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 ************************************ 00:13:47.507 START TEST nvmf_identify 00:13:47.507 ************************************ 00:13:47.507 04:28:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:47.507 * Looking for test storage... 
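The initiator_timeout run that just finished hinges on the Delay0 bdev stacked on Malloc0: while fio is writing, its latency is raised to roughly 31 seconds, presumably past the initiator's I/O timeout, and then dropped back so the queued I/O can drain. A rough sketch of the RPC sequence as it appears in the trace, where rpc_cmd is the harness wrapper around scripts/rpc.py and interpreting the latency values as microseconds is an assumption:

rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # baseline read/write latencies
# while fio runs, push latency far beyond the initiator's timeout ...
rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# ... then restore it so outstanding commands complete and fio can finish cleanly
for metric in avg_read avg_write p99_read p99_write; do
    rpc_cmd bdev_delay_update_latency Delay0 "$metric" 30
done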
00:13:47.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:47.507 04:28:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:47.507 04:28:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:47.507 04:28:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:47.507 04:28:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:47.507 04:28:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:47.507 04:28:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:47.507 04:28:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:47.507 04:28:49 -- scripts/common.sh@335 -- # IFS=.-: 00:13:47.507 04:28:49 -- scripts/common.sh@335 -- # read -ra ver1 00:13:47.507 04:28:49 -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.507 04:28:49 -- scripts/common.sh@336 -- # read -ra ver2 00:13:47.507 04:28:49 -- scripts/common.sh@337 -- # local 'op=<' 00:13:47.507 04:28:49 -- scripts/common.sh@339 -- # ver1_l=2 00:13:47.507 04:28:49 -- scripts/common.sh@340 -- # ver2_l=1 00:13:47.507 04:28:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:47.507 04:28:49 -- scripts/common.sh@343 -- # case "$op" in 00:13:47.507 04:28:49 -- scripts/common.sh@344 -- # : 1 00:13:47.507 04:28:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:47.507 04:28:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.507 04:28:49 -- scripts/common.sh@364 -- # decimal 1 00:13:47.507 04:28:49 -- scripts/common.sh@352 -- # local d=1 00:13:47.507 04:28:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.507 04:28:49 -- scripts/common.sh@354 -- # echo 1 00:13:47.507 04:28:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:47.507 04:28:49 -- scripts/common.sh@365 -- # decimal 2 00:13:47.507 04:28:49 -- scripts/common.sh@352 -- # local d=2 00:13:47.507 04:28:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.507 04:28:49 -- scripts/common.sh@354 -- # echo 2 00:13:47.507 04:28:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:47.507 04:28:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:47.507 04:28:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:47.507 04:28:49 -- scripts/common.sh@367 -- # return 0 00:13:47.507 04:28:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.507 04:28:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.507 --rc genhtml_branch_coverage=1 00:13:47.507 --rc genhtml_function_coverage=1 00:13:47.507 --rc genhtml_legend=1 00:13:47.507 --rc geninfo_all_blocks=1 00:13:47.507 --rc geninfo_unexecuted_blocks=1 00:13:47.507 00:13:47.507 ' 00:13:47.507 04:28:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.507 --rc genhtml_branch_coverage=1 00:13:47.507 --rc genhtml_function_coverage=1 00:13:47.507 --rc genhtml_legend=1 00:13:47.507 --rc geninfo_all_blocks=1 00:13:47.507 --rc geninfo_unexecuted_blocks=1 00:13:47.507 00:13:47.507 ' 00:13:47.508 04:28:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:47.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.508 --rc genhtml_branch_coverage=1 00:13:47.508 --rc genhtml_function_coverage=1 00:13:47.508 --rc genhtml_legend=1 00:13:47.508 --rc geninfo_all_blocks=1 00:13:47.508 --rc geninfo_unexecuted_blocks=1 00:13:47.508 00:13:47.508 ' 00:13:47.508 
04:28:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:47.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.508 --rc genhtml_branch_coverage=1 00:13:47.508 --rc genhtml_function_coverage=1 00:13:47.508 --rc genhtml_legend=1 00:13:47.508 --rc geninfo_all_blocks=1 00:13:47.508 --rc geninfo_unexecuted_blocks=1 00:13:47.508 00:13:47.508 ' 00:13:47.508 04:28:49 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:47.508 04:28:49 -- nvmf/common.sh@7 -- # uname -s 00:13:47.508 04:28:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.508 04:28:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.508 04:28:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.508 04:28:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.508 04:28:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.508 04:28:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.508 04:28:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.508 04:28:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.508 04:28:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.508 04:28:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.508 04:28:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:13:47.508 04:28:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:13:47.508 04:28:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.508 04:28:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.508 04:28:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:47.508 04:28:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.508 04:28:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.508 04:28:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.508 04:28:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.508 04:28:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.508 04:28:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.508 04:28:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.508 04:28:49 -- paths/export.sh@5 -- # export PATH 00:13:47.508 04:28:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.508 04:28:49 -- nvmf/common.sh@46 -- # : 0 00:13:47.508 04:28:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:47.508 04:28:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:47.508 04:28:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:47.508 04:28:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.508 04:28:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.508 04:28:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:47.508 04:28:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:47.508 04:28:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:47.508 04:28:49 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.508 04:28:49 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.508 04:28:49 -- host/identify.sh@14 -- # nvmftestinit 00:13:47.508 04:28:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:47.508 04:28:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.508 04:28:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:47.508 04:28:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:47.508 04:28:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:47.508 04:28:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.508 04:28:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.508 04:28:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.508 04:28:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:47.508 04:28:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:47.508 04:28:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:47.508 04:28:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:47.508 04:28:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:47.508 04:28:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:47.508 04:28:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.508 04:28:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.508 04:28:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:47.508 04:28:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:47.508 04:28:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:47.508 04:28:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:47.508 04:28:49 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:47.508 04:28:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.508 04:28:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:47.508 04:28:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:47.508 04:28:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:47.508 04:28:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:47.508 04:28:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:47.508 04:28:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:47.508 Cannot find device "nvmf_tgt_br" 00:13:47.508 04:28:49 -- nvmf/common.sh@154 -- # true 00:13:47.508 04:28:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.508 Cannot find device "nvmf_tgt_br2" 00:13:47.508 04:28:49 -- nvmf/common.sh@155 -- # true 00:13:47.508 04:28:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:47.508 04:28:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:47.508 Cannot find device "nvmf_tgt_br" 00:13:47.508 04:28:49 -- nvmf/common.sh@157 -- # true 00:13:47.508 04:28:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:47.508 Cannot find device "nvmf_tgt_br2" 00:13:47.508 04:28:49 -- nvmf/common.sh@158 -- # true 00:13:47.508 04:28:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:47.508 04:28:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:47.508 04:28:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:47.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.508 04:28:49 -- nvmf/common.sh@161 -- # true 00:13:47.508 04:28:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.508 04:28:49 -- nvmf/common.sh@162 -- # true 00:13:47.508 04:28:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:47.508 04:28:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:47.508 04:28:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:47.508 04:28:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:47.508 04:28:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:47.508 04:28:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.508 04:28:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.508 04:28:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:47.508 04:28:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:47.508 04:28:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:47.508 04:28:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:47.508 04:28:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:47.508 04:28:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:47.508 04:28:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.508 04:28:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.508 04:28:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:13:47.508 04:28:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:47.508 04:28:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:47.508 04:28:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.508 04:28:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.508 04:28:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.508 04:28:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.508 04:28:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.508 04:28:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:47.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:47.508 00:13:47.508 --- 10.0.0.2 ping statistics --- 00:13:47.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.508 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:47.508 04:28:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:47.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:13:47.508 00:13:47.508 --- 10.0.0.3 ping statistics --- 00:13:47.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.509 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:47.509 04:28:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:47.509 00:13:47.509 --- 10.0.0.1 ping statistics --- 00:13:47.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.509 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:47.509 04:28:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.509 04:28:49 -- nvmf/common.sh@421 -- # return 0 00:13:47.509 04:28:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.509 04:28:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.509 04:28:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.509 04:28:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.509 04:28:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.509 04:28:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.509 04:28:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.509 04:28:49 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:47.509 04:28:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.509 04:28:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 04:28:49 -- host/identify.sh@19 -- # nvmfpid=68400 00:13:47.509 04:28:49 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.509 04:28:49 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.509 04:28:49 -- host/identify.sh@23 -- # waitforlisten 68400 00:13:47.509 04:28:49 -- common/autotest_common.sh@829 -- # '[' -z 68400 ']' 00:13:47.509 04:28:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.509 04:28:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
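nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and waits until its RPC socket responds before the test issues any rpc_cmd calls. Roughly, with the polling loop an approximation of what waitforlisten does rather than a verbatim copy:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the application is up and answering
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done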
00:13:47.509 04:28:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.509 04:28:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.509 04:28:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 [2024-12-07 04:28:49.556664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:47.509 [2024-12-07 04:28:49.556766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.509 [2024-12-07 04:28:49.692030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.509 [2024-12-07 04:28:49.744584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.509 [2024-12-07 04:28:49.744775] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.509 [2024-12-07 04:28:49.744804] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.509 [2024-12-07 04:28:49.744812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.509 [2024-12-07 04:28:49.745301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.509 [2024-12-07 04:28:49.745451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.509 [2024-12-07 04:28:49.745586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.509 [2024-12-07 04:28:49.745694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.509 04:28:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.509 04:28:50 -- common/autotest_common.sh@862 -- # return 0 00:13:47.509 04:28:50 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 [2024-12-07 04:28:50.518044] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.509 04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:47.509 04:28:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 04:28:50 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 Malloc0 00:13:47.509 04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 
04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 [2024-12-07 04:28:50.612630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.509 04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:47.509 04:28:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.509 04:28:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.509 [2024-12-07 04:28:50.628387] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:47.509 [ 00:13:47.509 { 00:13:47.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:47.509 "subtype": "Discovery", 00:13:47.509 "listen_addresses": [ 00:13:47.509 { 00:13:47.509 "transport": "TCP", 00:13:47.509 "trtype": "TCP", 00:13:47.509 "adrfam": "IPv4", 00:13:47.509 "traddr": "10.0.0.2", 00:13:47.509 "trsvcid": "4420" 00:13:47.509 } 00:13:47.509 ], 00:13:47.509 "allow_any_host": true, 00:13:47.509 "hosts": [] 00:13:47.509 }, 00:13:47.509 { 00:13:47.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.509 "subtype": "NVMe", 00:13:47.509 "listen_addresses": [ 00:13:47.509 { 00:13:47.509 "transport": "TCP", 00:13:47.509 "trtype": "TCP", 00:13:47.509 "adrfam": "IPv4", 00:13:47.509 "traddr": "10.0.0.2", 00:13:47.509 "trsvcid": "4420" 00:13:47.509 } 00:13:47.509 ], 00:13:47.509 "allow_any_host": true, 00:13:47.509 "hosts": [], 00:13:47.509 "serial_number": "SPDK00000000000001", 00:13:47.509 "model_number": "SPDK bdev Controller", 00:13:47.509 "max_namespaces": 32, 00:13:47.509 "min_cntlid": 1, 00:13:47.509 "max_cntlid": 65519, 00:13:47.509 "namespaces": [ 00:13:47.509 { 00:13:47.509 "nsid": 1, 00:13:47.509 "bdev_name": "Malloc0", 00:13:47.509 "name": "Malloc0", 00:13:47.509 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:47.509 "eui64": "ABCDEF0123456789", 00:13:47.509 "uuid": "cc3f4bf4-0e03-4c06-958a-ea4db66ea795" 00:13:47.509 } 00:13:47.509 ] 00:13:47.509 } 00:13:47.509 ] 00:13:47.509 04:28:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.509 04:28:50 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:47.509 [2024-12-07 04:28:50.660675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
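The target-side setup captured above is driven entirely over SPDK's JSON-RPC socket (the /var/tmp/spdk.sock the target announces while starting up); the test's rpc_cmd helper simply forwards each call to the stock rpc.py client. A minimal sketch of the same sequence replayed by hand, assuming it is run from the SPDK repo root against that default socket:

    # Sketch only: the RPCs the test issues via rpc_cmd, replayed with scripts/rpc.py
    # (paths and default socket are assumptions; the commands themselves are taken from the log).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # "discovery" is the shorthand the test uses for the discovery subsystem
    # (nqn.2014-08.org.nvmexpress.discovery in the listing below).
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems

The final call returns the JSON array shown above: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as namespace 1, both listening on 10.0.0.2:4420.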
00:13:47.509 [2024-12-07 04:28:50.661088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68435 ] 00:13:47.772 [2024-12-07 04:28:50.794650] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:47.772 [2024-12-07 04:28:50.794720] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:47.772 [2024-12-07 04:28:50.794727] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:47.772 [2024-12-07 04:28:50.794738] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:47.772 [2024-12-07 04:28:50.794749] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:47.772 [2024-12-07 04:28:50.794863] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:47.772 [2024-12-07 04:28:50.794946] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb62d30 0 00:13:47.772 [2024-12-07 04:28:50.808689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:47.772 [2024-12-07 04:28:50.808711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:47.772 [2024-12-07 04:28:50.808733] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:47.772 [2024-12-07 04:28:50.808737] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:47.772 [2024-12-07 04:28:50.808778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.808784] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.808788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.772 [2024-12-07 04:28:50.808802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:47.772 [2024-12-07 04:28:50.808832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.772 [2024-12-07 04:28:50.816730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.772 [2024-12-07 04:28:50.816749] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.772 [2024-12-07 04:28:50.816769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.816774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.772 [2024-12-07 04:28:50.816784] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:47.772 [2024-12-07 04:28:50.816791] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:47.772 [2024-12-07 04:28:50.816797] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:47.772 [2024-12-07 04:28:50.816812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.816817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.816820] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.772 [2024-12-07 04:28:50.816829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.772 [2024-12-07 04:28:50.816854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.772 [2024-12-07 04:28:50.816920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.772 [2024-12-07 04:28:50.816927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.772 [2024-12-07 04:28:50.816930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.816934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.772 [2024-12-07 04:28:50.816939] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:47.772 [2024-12-07 04:28:50.816962] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:47.772 [2024-12-07 04:28:50.816985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.816990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.816993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.772 [2024-12-07 04:28:50.817001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.772 [2024-12-07 04:28:50.817019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.772 [2024-12-07 04:28:50.817067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.772 [2024-12-07 04:28:50.817073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.772 [2024-12-07 04:28:50.817077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.772 [2024-12-07 04:28:50.817094] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:47.772 [2024-12-07 04:28:50.817102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:47.772 [2024-12-07 04:28:50.817109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.772 [2024-12-07 04:28:50.817124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.772 [2024-12-07 04:28:50.817141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.772 [2024-12-07 04:28:50.817195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.772 [2024-12-07 04:28:50.817201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:13:47.772 [2024-12-07 04:28:50.817205] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817208] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.772 [2024-12-07 04:28:50.817214] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:47.772 [2024-12-07 04:28:50.817224] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817229] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.772 [2024-12-07 04:28:50.817240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.772 [2024-12-07 04:28:50.817256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.772 [2024-12-07 04:28:50.817306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.772 [2024-12-07 04:28:50.817312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.772 [2024-12-07 04:28:50.817316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817320] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.772 [2024-12-07 04:28:50.817325] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:47.772 [2024-12-07 04:28:50.817330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:47.772 [2024-12-07 04:28:50.817338] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:47.772 [2024-12-07 04:28:50.817444] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:47.772 [2024-12-07 04:28:50.817449] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:47.772 [2024-12-07 04:28:50.817458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.772 [2024-12-07 04:28:50.817473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.772 [2024-12-07 04:28:50.817491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.772 [2024-12-07 04:28:50.817547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.772 [2024-12-07 04:28:50.817553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.772 [2024-12-07 04:28:50.817557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.772 [2024-12-07 04:28:50.817561] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.773 [2024-12-07 04:28:50.817566] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:47.773 [2024-12-07 04:28:50.817576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.817592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.773 [2024-12-07 04:28:50.817608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.773 [2024-12-07 04:28:50.817659] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.773 [2024-12-07 04:28:50.817665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.773 [2024-12-07 04:28:50.817669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.773 [2024-12-07 04:28:50.817677] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:47.773 [2024-12-07 04:28:50.817683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:47.773 [2024-12-07 04:28:50.817691] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:47.773 [2024-12-07 04:28:50.817706] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:47.773 [2024-12-07 04:28:50.817729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817739] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.817747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.773 [2024-12-07 04:28:50.817767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.773 [2024-12-07 04:28:50.817857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.773 [2024-12-07 04:28:50.817864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.773 [2024-12-07 04:28:50.817867] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817871] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb62d30): datao=0, datal=4096, cccid=0 00:13:47.773 [2024-12-07 04:28:50.817876] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc0f30) on tqpair(0xb62d30): expected_datao=0, payload_size=4096 00:13:47.773 [2024-12-07 04:28:50.817885] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817889] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.773 [2024-12-07 04:28:50.817904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.773 [2024-12-07 04:28:50.817907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817911] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.773 [2024-12-07 04:28:50.817920] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:47.773 [2024-12-07 04:28:50.817926] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:47.773 [2024-12-07 04:28:50.817930] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:47.773 [2024-12-07 04:28:50.817935] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:47.773 [2024-12-07 04:28:50.817940] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:47.773 [2024-12-07 04:28:50.817945] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:47.773 [2024-12-07 04:28:50.817958] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:47.773 [2024-12-07 04:28:50.817966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.817974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.817982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:47.773 [2024-12-07 04:28:50.818001] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.773 [2024-12-07 04:28:50.818061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.773 [2024-12-07 04:28:50.818068] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.773 [2024-12-07 04:28:50.818072] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818076] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc0f30) on tqpair=0xb62d30 00:13:47.773 [2024-12-07 04:28:50.818083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.818098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.773 [2024-12-07 04:28:50.818104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.818117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.773 [2024-12-07 04:28:50.818123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818127] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.818136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.773 [2024-12-07 04:28:50.818142] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.818155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:47.773 [2024-12-07 04:28:50.818160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:47.773 [2024-12-07 04:28:50.818173] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:47.773 [2024-12-07 04:28:50.818180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818184] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818187] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.818194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.773 [2024-12-07 04:28:50.818213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc0f30, cid 0, qid 0 00:13:47.773 [2024-12-07 04:28:50.818220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1090, cid 1, qid 0 00:13:47.773 [2024-12-07 04:28:50.818225] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc11f0, cid 2, qid 0 00:13:47.773 [2024-12-07 04:28:50.818230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.773 [2024-12-07 04:28:50.818234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc14b0, cid 4, qid 0 00:13:47.773 [2024-12-07 04:28:50.818323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.773 [2024-12-07 04:28:50.818329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.773 [2024-12-07 04:28:50.818333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc14b0) on tqpair=0xb62d30 00:13:47.773 
[2024-12-07 04:28:50.818342] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:47.773 [2024-12-07 04:28:50.818348] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:47.773 [2024-12-07 04:28:50.818359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb62d30) 00:13:47.773 [2024-12-07 04:28:50.818374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.773 [2024-12-07 04:28:50.818391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc14b0, cid 4, qid 0 00:13:47.773 [2024-12-07 04:28:50.818451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.773 [2024-12-07 04:28:50.818457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.773 [2024-12-07 04:28:50.818461] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818465] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb62d30): datao=0, datal=4096, cccid=4 00:13:47.773 [2024-12-07 04:28:50.818470] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc14b0) on tqpair(0xb62d30): expected_datao=0, payload_size=4096 00:13:47.773 [2024-12-07 04:28:50.818477] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818481] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818489] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.773 [2024-12-07 04:28:50.818495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.773 [2024-12-07 04:28:50.818499] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818503] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc14b0) on tqpair=0xb62d30 00:13:47.773 [2024-12-07 04:28:50.818515] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:47.773 [2024-12-07 04:28:50.818539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.773 [2024-12-07 04:28:50.818549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb62d30) 00:13:47.774 [2024-12-07 04:28:50.818556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.774 [2024-12-07 04:28:50.818564] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb62d30) 00:13:47.774 [2024-12-07 04:28:50.818578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:47.774 [2024-12-07 04:28:50.818601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc14b0, cid 4, qid 0 00:13:47.774 [2024-12-07 04:28:50.818609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1610, cid 5, qid 0 00:13:47.774 [2024-12-07 04:28:50.818750] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.774 [2024-12-07 04:28:50.818759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.774 [2024-12-07 04:28:50.818763] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818766] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb62d30): datao=0, datal=1024, cccid=4 00:13:47.774 [2024-12-07 04:28:50.818771] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc14b0) on tqpair(0xb62d30): expected_datao=0, payload_size=1024 00:13:47.774 [2024-12-07 04:28:50.818779] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818783] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.774 [2024-12-07 04:28:50.818795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.774 [2024-12-07 04:28:50.818799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1610) on tqpair=0xb62d30 00:13:47.774 [2024-12-07 04:28:50.818821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.774 [2024-12-07 04:28:50.818828] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.774 [2024-12-07 04:28:50.818832] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818836] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc14b0) on tqpair=0xb62d30 00:13:47.774 [2024-12-07 04:28:50.818854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb62d30) 00:13:47.774 [2024-12-07 04:28:50.818872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.774 [2024-12-07 04:28:50.818898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc14b0, cid 4, qid 0 00:13:47.774 [2024-12-07 04:28:50.818972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.774 [2024-12-07 04:28:50.818979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.774 [2024-12-07 04:28:50.818983] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.818987] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb62d30): datao=0, datal=3072, cccid=4 00:13:47.774 [2024-12-07 04:28:50.818992] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc14b0) on tqpair(0xb62d30): expected_datao=0, payload_size=3072 00:13:47.774 [2024-12-07 04:28:50.819000] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 
04:28:50.819004] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.819012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.774 [2024-12-07 04:28:50.819018] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.774 [2024-12-07 04:28:50.819022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.819026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc14b0) on tqpair=0xb62d30 00:13:47.774 [2024-12-07 04:28:50.819036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.819055] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.819059] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb62d30) 00:13:47.774 [2024-12-07 04:28:50.819066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.774 [2024-12-07 04:28:50.819088] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc14b0, cid 4, qid 0 00:13:47.774 [2024-12-07 04:28:50.819157] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:47.774 [2024-12-07 04:28:50.819163] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:47.774 [2024-12-07 04:28:50.819167] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.819171] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb62d30): datao=0, datal=8, cccid=4 00:13:47.774 [2024-12-07 04:28:50.819175] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbc14b0) on tqpair(0xb62d30): expected_datao=0, payload_size=8 00:13:47.774 [2024-12-07 04:28:50.819183] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:47.774 [2024-12-07 04:28:50.819186] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:47.774 ===================================================== 00:13:47.774 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:47.774 ===================================================== 00:13:47.774 Controller Capabilities/Features 00:13:47.774 ================================ 00:13:47.774 Vendor ID: 0000 00:13:47.774 Subsystem Vendor ID: 0000 00:13:47.774 Serial Number: .................... 00:13:47.774 Model Number: ........................................ 
00:13:47.774 Firmware Version: 24.01.1 00:13:47.774 Recommended Arb Burst: 0 00:13:47.774 IEEE OUI Identifier: 00 00 00 00:13:47.774 Multi-path I/O 00:13:47.774 May have multiple subsystem ports: No 00:13:47.774 May have multiple controllers: No 00:13:47.774 Associated with SR-IOV VF: No 00:13:47.774 Max Data Transfer Size: 131072 00:13:47.774 Max Number of Namespaces: 0 00:13:47.774 Max Number of I/O Queues: 1024 00:13:47.774 NVMe Specification Version (VS): 1.3 00:13:47.774 NVMe Specification Version (Identify): 1.3 00:13:47.774 Maximum Queue Entries: 128 00:13:47.774 Contiguous Queues Required: Yes 00:13:47.774 Arbitration Mechanisms Supported 00:13:47.774 Weighted Round Robin: Not Supported 00:13:47.774 Vendor Specific: Not Supported 00:13:47.774 Reset Timeout: 15000 ms 00:13:47.774 Doorbell Stride: 4 bytes 00:13:47.774 NVM Subsystem Reset: Not Supported 00:13:47.774 Command Sets Supported 00:13:47.774 NVM Command Set: Supported 00:13:47.774 Boot Partition: Not Supported 00:13:47.774 Memory Page Size Minimum: 4096 bytes 00:13:47.774 Memory Page Size Maximum: 4096 bytes 00:13:47.774 Persistent Memory Region: Not Supported 00:13:47.774 Optional Asynchronous Events Supported 00:13:47.774 Namespace Attribute Notices: Not Supported 00:13:47.774 Firmware Activation Notices: Not Supported 00:13:47.774 ANA Change Notices: Not Supported 00:13:47.774 PLE Aggregate Log Change Notices: Not Supported 00:13:47.774 LBA Status Info Alert Notices: Not Supported 00:13:47.774 EGE Aggregate Log Change Notices: Not Supported 00:13:47.774 Normal NVM Subsystem Shutdown event: Not Supported 00:13:47.774 Zone Descriptor Change Notices: Not Supported 00:13:47.774 Discovery Log Change Notices: Supported 00:13:47.774 Controller Attributes 00:13:47.774 128-bit Host Identifier: Not Supported 00:13:47.774 Non-Operational Permissive Mode: Not Supported 00:13:47.774 NVM Sets: Not Supported 00:13:47.774 Read Recovery Levels: Not Supported 00:13:47.774 Endurance Groups: Not Supported 00:13:47.774 Predictable Latency Mode: Not Supported 00:13:47.774 Traffic Based Keep ALive: Not Supported 00:13:47.774 Namespace Granularity: Not Supported 00:13:47.774 SQ Associations: Not Supported 00:13:47.774 UUID List: Not Supported 00:13:47.774 Multi-Domain Subsystem: Not Supported 00:13:47.774 Fixed Capacity Management: Not Supported 00:13:47.774 Variable Capacity Management: Not Supported 00:13:47.774 Delete Endurance Group: Not Supported 00:13:47.774 Delete NVM Set: Not Supported 00:13:47.774 Extended LBA Formats Supported: Not Supported 00:13:47.774 Flexible Data Placement Supported: Not Supported 00:13:47.774 00:13:47.774 Controller Memory Buffer Support 00:13:47.774 ================================ 00:13:47.774 Supported: No 00:13:47.774 00:13:47.774 Persistent Memory Region Support 00:13:47.774 ================================ 00:13:47.774 Supported: No 00:13:47.774 00:13:47.774 Admin Command Set Attributes 00:13:47.774 ============================ 00:13:47.774 Security Send/Receive: Not Supported 00:13:47.774 Format NVM: Not Supported 00:13:47.774 Firmware Activate/Download: Not Supported 00:13:47.774 Namespace Management: Not Supported 00:13:47.774 Device Self-Test: Not Supported 00:13:47.774 Directives: Not Supported 00:13:47.774 NVMe-MI: Not Supported 00:13:47.774 Virtualization Management: Not Supported 00:13:47.774 Doorbell Buffer Config: Not Supported 00:13:47.774 Get LBA Status Capability: Not Supported 00:13:47.774 Command & Feature Lockdown Capability: Not Supported 00:13:47.774 Abort Command Limit: 1 00:13:47.774 
Async Event Request Limit: 4 00:13:47.774 Number of Firmware Slots: N/A 00:13:47.774 Firmware Slot 1 Read-Only: N/A 00:13:47.774 Firmware Activation Without Reset: N/A 00:13:47.774 Multiple Update Detection Support: N/A 00:13:47.774 Firmware Update Granularity: No Information Provided 00:13:47.774 Per-Namespace SMART Log: No 00:13:47.774 Asymmetric Namespace Access Log Page: Not Supported 00:13:47.774 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:47.775 Command Effects Log Page: Not Supported 00:13:47.775 Get Log Page Extended Data: Supported 00:13:47.775 Telemetry Log Pages: Not Supported 00:13:47.775 Persistent Event Log Pages: Not Supported 00:13:47.775 Supported Log Pages Log Page: May Support 00:13:47.775 Commands Supported & Effects Log Page: Not Supported 00:13:47.775 Feature Identifiers & Effects Log Page:May Support 00:13:47.775 NVMe-MI Commands & Effects Log Page: May Support 00:13:47.775 Data Area 4 for Telemetry Log: Not Supported 00:13:47.775 Error Log Page Entries Supported: 128 00:13:47.775 Keep Alive: Not Supported 00:13:47.775 00:13:47.775 NVM Command Set Attributes 00:13:47.775 ========================== 00:13:47.775 Submission Queue Entry Size 00:13:47.775 Max: 1 00:13:47.775 Min: 1 00:13:47.775 Completion Queue Entry Size 00:13:47.775 Max: 1 00:13:47.775 Min: 1 00:13:47.775 Number of Namespaces: 0 00:13:47.775 Compare Command: Not Supported 00:13:47.775 Write Uncorrectable Command: Not Supported 00:13:47.775 Dataset Management Command: Not Supported 00:13:47.775 Write Zeroes Command: Not Supported 00:13:47.775 Set Features Save Field: Not Supported 00:13:47.775 Reservations: Not Supported 00:13:47.775 Timestamp: Not Supported 00:13:47.775 Copy: Not Supported 00:13:47.775 Volatile Write Cache: Not Present 00:13:47.775 Atomic Write Unit (Normal): 1 00:13:47.775 Atomic Write Unit (PFail): 1 00:13:47.775 Atomic Compare & Write Unit: 1 00:13:47.775 Fused Compare & Write: Supported 00:13:47.775 Scatter-Gather List 00:13:47.775 SGL Command Set: Supported 00:13:47.775 SGL Keyed: Supported 00:13:47.775 SGL Bit Bucket Descriptor: Not Supported 00:13:47.775 SGL Metadata Pointer: Not Supported 00:13:47.775 Oversized SGL: Not Supported 00:13:47.775 SGL Metadata Address: Not Supported 00:13:47.775 SGL Offset: Supported 00:13:47.775 Transport SGL Data Block: Not Supported 00:13:47.775 Replay Protected Memory Block: Not Supported 00:13:47.775 00:13:47.775 Firmware Slot Information 00:13:47.775 ========================= 00:13:47.775 Active slot: 0 00:13:47.775 00:13:47.775 00:13:47.775 Error Log 00:13:47.775 ========= 00:13:47.775 00:13:47.775 Active Namespaces 00:13:47.775 ================= 00:13:47.775 Discovery Log Page 00:13:47.775 ================== 00:13:47.775 Generation Counter: 2 00:13:47.775 Number of Records: 2 00:13:47.775 Record Format: 0 00:13:47.775 00:13:47.775 Discovery Log Entry 0 00:13:47.775 ---------------------- 00:13:47.775 Transport Type: 3 (TCP) 00:13:47.775 Address Family: 1 (IPv4) 00:13:47.775 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:47.775 Entry Flags: 00:13:47.775 Duplicate Returned Information: 1 00:13:47.775 Explicit Persistent Connection Support for Discovery: 1 00:13:47.775 Transport Requirements: 00:13:47.775 Secure Channel: Not Required 00:13:47.775 Port ID: 0 (0x0000) 00:13:47.775 Controller ID: 65535 (0xffff) 00:13:47.775 Admin Max SQ Size: 128 00:13:47.775 Transport Service Identifier: 4420 00:13:47.775 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:47.775 Transport Address: 10.0.0.2 00:13:47.775 
Discovery Log Entry 1 00:13:47.775 ---------------------- 00:13:47.775 Transport Type: 3 (TCP) 00:13:47.775 Address Family: 1 (IPv4) 00:13:47.775 Subsystem Type: 2 (NVM Subsystem) 00:13:47.775 Entry Flags: 00:13:47.775 Duplicate Returned Information: 0 00:13:47.775 Explicit Persistent Connection Support for Discovery: 0 00:13:47.775 Transport Requirements: 00:13:47.775 Secure Channel: Not Required 00:13:47.775 Port ID: 0 (0x0000) 00:13:47.775 Controller ID: 65535 (0xffff) 00:13:47.775 Admin Max SQ Size: 128 00:13:47.775 Transport Service Identifier: 4420 00:13:47.775 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:47.775 Transport Address: 10.0.0.2 [2024-12-07 04:28:50.819200] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.775 [2024-12-07 04:28:50.819207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.775 [2024-12-07 04:28:50.819211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819215] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc14b0) on tqpair=0xb62d30 00:13:47.775 [2024-12-07 04:28:50.819307] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:47.775 [2024-12-07 04:28:50.819322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.775 [2024-12-07 04:28:50.819329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.775 [2024-12-07 04:28:50.819336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.775 [2024-12-07 04:28:50.819342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.775 [2024-12-07 04:28:50.819351] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819355] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.775 [2024-12-07 04:28:50.819366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.775 [2024-12-07 04:28:50.819417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.775 [2024-12-07 04:28:50.819471] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.775 [2024-12-07 04:28:50.819478] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.775 [2024-12-07 04:28:50.819482] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819486] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.775 [2024-12-07 04:28:50.819495] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819499] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.775 [2024-12-07 04:28:50.819511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.775 [2024-12-07 04:28:50.819532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.775 [2024-12-07 04:28:50.819608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.775 [2024-12-07 04:28:50.819615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.775 [2024-12-07 04:28:50.819619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.775 [2024-12-07 04:28:50.819629] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:47.775 [2024-12-07 04:28:50.819634] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:47.775 [2024-12-07 04:28:50.819644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.775 [2024-12-07 04:28:50.819676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.775 [2024-12-07 04:28:50.819697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.775 [2024-12-07 04:28:50.819778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.775 [2024-12-07 04:28:50.819785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.775 [2024-12-07 04:28:50.819788] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.775 [2024-12-07 04:28:50.819803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819811] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.775 [2024-12-07 04:28:50.819819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.775 [2024-12-07 04:28:50.819835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.775 [2024-12-07 04:28:50.819887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.775 [2024-12-07 04:28:50.819893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.775 [2024-12-07 04:28:50.819897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819901] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.775 [2024-12-07 04:28:50.819921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.819929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 
00:13:47.775 [2024-12-07 04:28:50.819947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.775 [2024-12-07 04:28:50.819963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.775 [2024-12-07 04:28:50.820011] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.775 [2024-12-07 04:28:50.820017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.775 [2024-12-07 04:28:50.820021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.775 [2024-12-07 04:28:50.820025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.820065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.820108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.820114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.820118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820140] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.820163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.820208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.820215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.820218] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820222] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820236] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 
04:28:50.820263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.820314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.820320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.820324] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820328] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820338] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820346] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.820369] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.820417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.820423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.820427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820449] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.820472] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.820523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.820529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.820533] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.820578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.820627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:13:47.776 [2024-12-07 04:28:50.820633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.820636] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.820650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820654] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.820658] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.820665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.820681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.824729] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.824748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.824752] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.824756] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.824770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.824775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.824778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb62d30) 00:13:47.776 [2024-12-07 04:28:50.824786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.776 [2024-12-07 04:28:50.824808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbc1350, cid 3, qid 0 00:13:47.776 [2024-12-07 04:28:50.824858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:47.776 [2024-12-07 04:28:50.824864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:47.776 [2024-12-07 04:28:50.824868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:47.776 [2024-12-07 04:28:50.824871] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbc1350) on tqpair=0xb62d30 00:13:47.776 [2024-12-07 04:28:50.824879] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:13:47.776 00:13:47.776 04:28:50 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:47.776 [2024-12-07 04:28:50.862256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:47.776 [2024-12-07 04:28:50.862479] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68437 ] 00:13:47.776 [2024-12-07 04:28:50.999397] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:47.776 [2024-12-07 04:28:50.999485] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:47.776 [2024-12-07 04:28:50.999494] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:47.776 [2024-12-07 04:28:50.999505] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:47.776 [2024-12-07 04:28:50.999517] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:47.776 [2024-12-07 04:28:50.999623] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:47.776 [2024-12-07 04:28:50.999702] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c9dd30 0 00:13:47.776 [2024-12-07 04:28:51.004775] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:47.776 [2024-12-07 04:28:51.004797] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:47.777 [2024-12-07 04:28:51.004819] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:47.777 [2024-12-07 04:28:51.004823] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:47.777 [2024-12-07 04:28:51.004863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:47.777 [2024-12-07 04:28:51.004871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:47.777 [2024-12-07 04:28:51.004875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:47.777 [2024-12-07 04:28:51.004886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:47.777 [2024-12-07 04:28:51.004914] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.012713] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.012744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 04:28:51.012766] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.012771] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 [2024-12-07 04:28:51.012785] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:48.042 [2024-12-07 04:28:51.012793] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:48.042 [2024-12-07 04:28:51.012799] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:48.042 [2024-12-07 04:28:51.012815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.012820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.012824] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.012832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.012858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.012908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.012915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 04:28:51.012919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.012923] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 [2024-12-07 04:28:51.012929] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:48.042 [2024-12-07 04:28:51.012936] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:48.042 [2024-12-07 04:28:51.012943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.012947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.012951] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.012973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.013007] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.013069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.013077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 04:28:51.013080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 [2024-12-07 04:28:51.013091] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:48.042 [2024-12-07 04:28:51.013100] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:48.042 [2024-12-07 04:28:51.013107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013115] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.013123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.013140] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.013186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.013193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 
04:28:51.013198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 [2024-12-07 04:28:51.013209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:48.042 [2024-12-07 04:28:51.013220] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.013236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.013253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.013305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.013312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 04:28:51.013316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013320] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 [2024-12-07 04:28:51.013326] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:48.042 [2024-12-07 04:28:51.013332] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:48.042 [2024-12-07 04:28:51.013340] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:48.042 [2024-12-07 04:28:51.013446] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:48.042 [2024-12-07 04:28:51.013460] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:48.042 [2024-12-07 04:28:51.013470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013475] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.013487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.013508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.013559] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.013566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 04:28:51.013571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 
[2024-12-07 04:28:51.013581] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:48.042 [2024-12-07 04:28:51.013592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.013609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.013627] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.013681] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.042 [2024-12-07 04:28:51.013690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.042 [2024-12-07 04:28:51.013694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.042 [2024-12-07 04:28:51.013705] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:48.042 [2024-12-07 04:28:51.013710] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:48.042 [2024-12-07 04:28:51.013719] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:48.042 [2024-12-07 04:28:51.013735] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:48.042 [2024-12-07 04:28:51.013746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013751] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.042 [2024-12-07 04:28:51.013755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.042 [2024-12-07 04:28:51.013764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.042 [2024-12-07 04:28:51.013785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.042 [2024-12-07 04:28:51.013886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.042 [2024-12-07 04:28:51.013893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.043 [2024-12-07 04:28:51.013898] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.013902] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=4096, cccid=0 00:13:48.043 [2024-12-07 04:28:51.013907] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfbf30) on tqpair(0x1c9dd30): expected_datao=0, payload_size=4096 00:13:48.043 [2024-12-07 04:28:51.013916] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.013921] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.013930] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.043 [2024-12-07 04:28:51.013938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.043 [2024-12-07 04:28:51.013942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.013946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.043 [2024-12-07 04:28:51.013956] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:48.043 [2024-12-07 04:28:51.013962] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:48.043 [2024-12-07 04:28:51.013967] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:48.043 [2024-12-07 04:28:51.013972] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:48.043 [2024-12-07 04:28:51.013977] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:48.043 [2024-12-07 04:28:51.013982] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.013997] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014014] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.043 [2024-12-07 04:28:51.014043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.043 [2024-12-07 04:28:51.014120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.043 [2024-12-07 04:28:51.014127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.043 [2024-12-07 04:28:51.014131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfbf30) on tqpair=0x1c9dd30 00:13:48.043 [2024-12-07 04:28:51.014144] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.043 [2024-12-07 04:28:51.014170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.043 [2024-12-07 04:28:51.014190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.043 [2024-12-07 04:28:51.014210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.043 [2024-12-07 04:28:51.014229] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014243] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014250] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.043 [2024-12-07 04:28:51.014286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfbf30, cid 0, qid 0 00:13:48.043 [2024-12-07 04:28:51.014293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc090, cid 1, qid 0 00:13:48.043 [2024-12-07 04:28:51.014298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc1f0, cid 2, qid 0 00:13:48.043 [2024-12-07 04:28:51.014303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.043 [2024-12-07 04:28:51.014308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.043 [2024-12-07 04:28:51.014400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.043 [2024-12-07 04:28:51.014407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.043 [2024-12-07 04:28:51.014411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.043 [2024-12-07 04:28:51.014422] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:48.043 [2024-12-07 04:28:51.014428] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014436] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014447] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014463] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:48.043 [2024-12-07 04:28:51.014490] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.043 [2024-12-07 04:28:51.014548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.043 [2024-12-07 04:28:51.014557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.043 [2024-12-07 04:28:51.014561] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.043 [2024-12-07 04:28:51.014628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014640] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:48.043 [2024-12-07 04:28:51.014649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014672] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.043 [2024-12-07 04:28:51.014680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.043 [2024-12-07 04:28:51.014701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.043 [2024-12-07 04:28:51.014767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.043 [2024-12-07 04:28:51.014784] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.043 [2024-12-07 04:28:51.014789] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014793] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=4096, cccid=4 00:13:48.043 [2024-12-07 04:28:51.014798] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc4b0) on tqpair(0x1c9dd30): expected_datao=0, payload_size=4096 00:13:48.043 [2024-12-07 04:28:51.014807] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014811] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:13:48.043 [2024-12-07 04:28:51.014821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.043 [2024-12-07 04:28:51.014827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.043 [2024-12-07 04:28:51.014831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.043 [2024-12-07 04:28:51.014836] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.043 [2024-12-07 04:28:51.014853] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:48.044 [2024-12-07 04:28:51.014864] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.014876] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.014884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.014889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.014893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.014900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.044 [2024-12-07 04:28:51.014922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.044 [2024-12-07 04:28:51.014999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.044 [2024-12-07 04:28:51.015006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.044 [2024-12-07 04:28:51.015010] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015014] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=4096, cccid=4 00:13:48.044 [2024-12-07 04:28:51.015019] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc4b0) on tqpair(0x1c9dd30): expected_datao=0, payload_size=4096 00:13:48.044 [2024-12-07 04:28:51.015027] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015031] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015046] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015054] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015071] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015097] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.015109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.044 [2024-12-07 04:28:51.015128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.044 [2024-12-07 04:28:51.015193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.044 [2024-12-07 04:28:51.015200] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.044 [2024-12-07 04:28:51.015204] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015208] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=4096, cccid=4 00:13:48.044 [2024-12-07 04:28:51.015213] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc4b0) on tqpair(0x1c9dd30): expected_datao=0, payload_size=4096 00:13:48.044 [2024-12-07 04:28:51.015221] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015225] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015268] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015282] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015289] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015295] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015300] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:48.044 [2024-12-07 04:28:51.015305] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:48.044 [2024-12-07 04:28:51.015311] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:48.044 [2024-12-07 04:28:51.015326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015335] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.015343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.044 [2024-12-07 04:28:51.015350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015358] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.015365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.044 [2024-12-07 04:28:51.015419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.044 [2024-12-07 04:28:51.015428] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc610, cid 5, qid 0 00:13:48.044 [2024-12-07 04:28:51.015502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc610) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015556] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015561] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015565] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.015572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.044 [2024-12-07 04:28:51.015591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc610, cid 5, qid 0 00:13:48.044 [2024-12-07 04:28:51.015643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015667] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc610) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015691] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.015703] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.044 [2024-12-07 04:28:51.015723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc610, cid 5, qid 0 00:13:48.044 [2024-12-07 04:28:51.015798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc610) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015829] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9dd30) 00:13:48.044 [2024-12-07 04:28:51.015841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.044 [2024-12-07 04:28:51.015858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc610, cid 5, qid 0 00:13:48.044 [2024-12-07 04:28:51.015912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.044 [2024-12-07 04:28:51.015919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.044 [2024-12-07 04:28:51.015924] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015928] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc610) on tqpair=0x1c9dd30 00:13:48.044 [2024-12-07 04:28:51.015942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.044 [2024-12-07 04:28:51.015948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.015952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c9dd30) 00:13:48.045 [2024-12-07 04:28:51.015959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.045 [2024-12-07 04:28:51.015967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.015971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.015975] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c9dd30) 00:13:48.045 [2024-12-07 04:28:51.015982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.045 [2024-12-07 04:28:51.015990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.015994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.015999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c9dd30) 00:13:48.045 [2024-12-07 04:28:51.016005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:48.045 [2024-12-07 04:28:51.016013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c9dd30) 00:13:48.045 [2024-12-07 04:28:51.016028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.045 [2024-12-07 04:28:51.016047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc610, cid 5, qid 0 00:13:48.045 [2024-12-07 04:28:51.016054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc4b0, cid 4, qid 0 00:13:48.045 [2024-12-07 04:28:51.016059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc770, cid 6, qid 0 00:13:48.045 [2024-12-07 04:28:51.016064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc8d0, cid 7, qid 0 00:13:48.045 [2024-12-07 04:28:51.016194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.045 [2024-12-07 04:28:51.016202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.045 [2024-12-07 04:28:51.016206] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016210] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=8192, cccid=5 00:13:48.045 [2024-12-07 04:28:51.016215] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc610) on tqpair(0x1c9dd30): expected_datao=0, payload_size=8192 00:13:48.045 [2024-12-07 04:28:51.016233] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016238] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016245] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.045 [2024-12-07 04:28:51.016251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.045 [2024-12-07 04:28:51.016255] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016259] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=512, cccid=4 00:13:48.045 [2024-12-07 04:28:51.016263] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc4b0) on tqpair(0x1c9dd30): expected_datao=0, payload_size=512 00:13:48.045 [2024-12-07 04:28:51.016271] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016275] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.045 [2024-12-07 04:28:51.016287] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.045 [2024-12-07 04:28:51.016291] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016294] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=512, cccid=6 00:13:48.045 [2024-12-07 04:28:51.016299] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc770) on tqpair(0x1c9dd30): expected_datao=0, payload_size=512 00:13:48.045 [2024-12-07 04:28:51.016306] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016310] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016316] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:48.045 [2024-12-07 04:28:51.016322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:48.045 [2024-12-07 04:28:51.016326] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016330] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c9dd30): datao=0, datal=4096, cccid=7 00:13:48.045 [2024-12-07 04:28:51.016334] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cfc8d0) on tqpair(0x1c9dd30): expected_datao=0, payload_size=4096 00:13:48.045 [2024-12-07 04:28:51.016342] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016346] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.045 [2024-12-07 04:28:51.016361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.045 [2024-12-07 04:28:51.016365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc610) on tqpair=0x1c9dd30 00:13:48.045 [2024-12-07 04:28:51.016387] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.045 [2024-12-07 04:28:51.016394] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.045 [2024-12-07 04:28:51.016398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc4b0) on tqpair=0x1c9dd30 00:13:48.045 [2024-12-07 04:28:51.016413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.045 [2024-12-07 04:28:51.016420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.045 [2024-12-07 04:28:51.016424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.045 [2024-12-07 04:28:51.016428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc770) on tqpair=0x1c9dd30 00:13:48.045 [2024-12-07 04:28:51.016437] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.045 [2024-12-07 04:28:51.016443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.045 [2024-12-07 04:28:51.016446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.045 ===================================================== 00:13:48.045 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.045 ===================================================== 00:13:48.045 Controller Capabilities/Features 00:13:48.045 ================================ 00:13:48.045 Vendor ID: 8086 00:13:48.045 Subsystem Vendor ID: 8086 00:13:48.045 Serial Number: SPDK00000000000001 00:13:48.045 Model Number: SPDK bdev Controller 00:13:48.045 Firmware Version: 24.01.1 00:13:48.045 Recommended Arb Burst: 6 00:13:48.045 IEEE OUI Identifier: e4 d2 5c 00:13:48.045 Multi-path I/O 00:13:48.045 May have multiple subsystem ports: Yes 00:13:48.045 May have multiple controllers: Yes 00:13:48.045 Associated with SR-IOV VF: No 00:13:48.045 Max Data Transfer Size: 
131072 00:13:48.045 Max Number of Namespaces: 32 00:13:48.045 Max Number of I/O Queues: 127 00:13:48.045 NVMe Specification Version (VS): 1.3 00:13:48.045 NVMe Specification Version (Identify): 1.3 00:13:48.045 Maximum Queue Entries: 128 00:13:48.045 Contiguous Queues Required: Yes 00:13:48.045 Arbitration Mechanisms Supported 00:13:48.045 Weighted Round Robin: Not Supported 00:13:48.045 Vendor Specific: Not Supported 00:13:48.045 Reset Timeout: 15000 ms 00:13:48.045 Doorbell Stride: 4 bytes 00:13:48.045 NVM Subsystem Reset: Not Supported 00:13:48.045 Command Sets Supported 00:13:48.045 NVM Command Set: Supported 00:13:48.045 Boot Partition: Not Supported 00:13:48.045 Memory Page Size Minimum: 4096 bytes 00:13:48.045 Memory Page Size Maximum: 4096 bytes 00:13:48.045 Persistent Memory Region: Not Supported 00:13:48.045 Optional Asynchronous Events Supported 00:13:48.045 Namespace Attribute Notices: Supported 00:13:48.045 Firmware Activation Notices: Not Supported 00:13:48.045 ANA Change Notices: Not Supported 00:13:48.045 PLE Aggregate Log Change Notices: Not Supported 00:13:48.045 LBA Status Info Alert Notices: Not Supported 00:13:48.045 EGE Aggregate Log Change Notices: Not Supported 00:13:48.045 Normal NVM Subsystem Shutdown event: Not Supported 00:13:48.045 Zone Descriptor Change Notices: Not Supported 00:13:48.045 Discovery Log Change Notices: Not Supported 00:13:48.045 Controller Attributes 00:13:48.045 128-bit Host Identifier: Supported 00:13:48.045 Non-Operational Permissive Mode: Not Supported 00:13:48.045 NVM Sets: Not Supported 00:13:48.045 Read Recovery Levels: Not Supported 00:13:48.045 Endurance Groups: Not Supported 00:13:48.045 Predictable Latency Mode: Not Supported 00:13:48.045 Traffic Based Keep ALive: Not Supported 00:13:48.045 Namespace Granularity: Not Supported 00:13:48.045 SQ Associations: Not Supported 00:13:48.046 UUID List: Not Supported 00:13:48.046 Multi-Domain Subsystem: Not Supported 00:13:48.046 Fixed Capacity Management: Not Supported 00:13:48.046 Variable Capacity Management: Not Supported 00:13:48.046 Delete Endurance Group: Not Supported 00:13:48.046 Delete NVM Set: Not Supported 00:13:48.046 Extended LBA Formats Supported: Not Supported 00:13:48.046 Flexible Data Placement Supported: Not Supported 00:13:48.046 00:13:48.046 Controller Memory Buffer Support 00:13:48.046 ================================ 00:13:48.046 Supported: No 00:13:48.046 00:13:48.046 Persistent Memory Region Support 00:13:48.046 ================================ 00:13:48.046 Supported: No 00:13:48.046 00:13:48.046 Admin Command Set Attributes 00:13:48.046 ============================ 00:13:48.046 Security Send/Receive: Not Supported 00:13:48.046 Format NVM: Not Supported 00:13:48.046 Firmware Activate/Download: Not Supported 00:13:48.046 Namespace Management: Not Supported 00:13:48.046 Device Self-Test: Not Supported 00:13:48.046 Directives: Not Supported 00:13:48.046 NVMe-MI: Not Supported 00:13:48.046 Virtualization Management: Not Supported 00:13:48.046 Doorbell Buffer Config: Not Supported 00:13:48.046 Get LBA Status Capability: Not Supported 00:13:48.046 Command & Feature Lockdown Capability: Not Supported 00:13:48.046 Abort Command Limit: 4 00:13:48.046 Async Event Request Limit: 4 00:13:48.046 Number of Firmware Slots: N/A 00:13:48.046 Firmware Slot 1 Read-Only: N/A 00:13:48.046 Firmware Activation Without Reset: N/A 00:13:48.046 Multiple Update Detection Support: N/A 00:13:48.046 Firmware Update Granularity: No Information Provided 00:13:48.046 Per-Namespace SMART Log: No 
00:13:48.046 Asymmetric Namespace Access Log Page: Not Supported 00:13:48.046 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:48.046 Command Effects Log Page: Supported 00:13:48.046 Get Log Page Extended Data: Supported 00:13:48.046 Telemetry Log Pages: Not Supported 00:13:48.046 Persistent Event Log Pages: Not Supported 00:13:48.046 Supported Log Pages Log Page: May Support 00:13:48.046 Commands Supported & Effects Log Page: Not Supported 00:13:48.046 Feature Identifiers & Effects Log Page:May Support 00:13:48.046 NVMe-MI Commands & Effects Log Page: May Support 00:13:48.046 Data Area 4 for Telemetry Log: Not Supported 00:13:48.046 Error Log Page Entries Supported: 128 00:13:48.046 Keep Alive: Supported 00:13:48.046 Keep Alive Granularity: 10000 ms 00:13:48.046 00:13:48.046 NVM Command Set Attributes 00:13:48.046 ========================== 00:13:48.046 Submission Queue Entry Size 00:13:48.046 Max: 64 00:13:48.046 Min: 64 00:13:48.046 Completion Queue Entry Size 00:13:48.046 Max: 16 00:13:48.046 Min: 16 00:13:48.046 Number of Namespaces: 32 00:13:48.046 Compare Command: Supported 00:13:48.046 Write Uncorrectable Command: Not Supported 00:13:48.046 Dataset Management Command: Supported 00:13:48.046 Write Zeroes Command: Supported 00:13:48.046 Set Features Save Field: Not Supported 00:13:48.046 Reservations: Supported 00:13:48.046 Timestamp: Not Supported 00:13:48.046 Copy: Supported 00:13:48.046 Volatile Write Cache: Present 00:13:48.046 Atomic Write Unit (Normal): 1 00:13:48.046 Atomic Write Unit (PFail): 1 00:13:48.046 Atomic Compare & Write Unit: 1 00:13:48.046 Fused Compare & Write: Supported 00:13:48.046 Scatter-Gather List 00:13:48.046 SGL Command Set: Supported 00:13:48.046 SGL Keyed: Supported 00:13:48.046 SGL Bit Bucket Descriptor: Not Supported 00:13:48.046 SGL Metadata Pointer: Not Supported 00:13:48.046 Oversized SGL: Not Supported 00:13:48.046 SGL Metadata Address: Not Supported 00:13:48.046 SGL Offset: Supported 00:13:48.046 Transport SGL Data Block: Not Supported 00:13:48.046 Replay Protected Memory Block: Not Supported 00:13:48.046 00:13:48.046 Firmware Slot Information 00:13:48.046 ========================= 00:13:48.046 Active slot: 1 00:13:48.046 Slot 1 Firmware Revision: 24.01.1 00:13:48.046 00:13:48.046 00:13:48.046 Commands Supported and Effects 00:13:48.046 ============================== 00:13:48.046 Admin Commands 00:13:48.046 -------------- 00:13:48.046 Get Log Page (02h): Supported 00:13:48.046 Identify (06h): Supported 00:13:48.046 Abort (08h): Supported 00:13:48.046 Set Features (09h): Supported 00:13:48.046 Get Features (0Ah): Supported 00:13:48.046 Asynchronous Event Request (0Ch): Supported 00:13:48.046 Keep Alive (18h): Supported 00:13:48.046 I/O Commands 00:13:48.046 ------------ 00:13:48.046 Flush (00h): Supported LBA-Change 00:13:48.046 Write (01h): Supported LBA-Change 00:13:48.046 Read (02h): Supported 00:13:48.046 Compare (05h): Supported 00:13:48.046 Write Zeroes (08h): Supported LBA-Change 00:13:48.046 Dataset Management (09h): Supported LBA-Change 00:13:48.046 Copy (19h): Supported LBA-Change 00:13:48.046 Unknown (79h): Supported LBA-Change 00:13:48.046 Unknown (7Ah): Supported 00:13:48.046 00:13:48.046 Error Log 00:13:48.046 ========= 00:13:48.046 00:13:48.046 Arbitration 00:13:48.046 =========== 00:13:48.046 Arbitration Burst: 1 00:13:48.046 00:13:48.046 Power Management 00:13:48.046 ================ 00:13:48.046 Number of Power States: 1 00:13:48.046 Current Power State: Power State #0 00:13:48.046 Power State #0: 00:13:48.046 Max Power: 0.00 W 
00:13:48.046 Non-Operational State: Operational 00:13:48.046 Entry Latency: Not Reported 00:13:48.046 Exit Latency: Not Reported 00:13:48.046 Relative Read Throughput: 0 00:13:48.046 Relative Read Latency: 0 00:13:48.046 Relative Write Throughput: 0 00:13:48.046 Relative Write Latency: 0 00:13:48.046 Idle Power: Not Reported 00:13:48.046 Active Power: Not Reported 00:13:48.046 Non-Operational Permissive Mode: Not Supported 00:13:48.046 00:13:48.046 Health Information 00:13:48.046 ================== 00:13:48.046 Critical Warnings: 00:13:48.046 Available Spare Space: OK 00:13:48.046 Temperature: OK 00:13:48.046 Device Reliability: OK 00:13:48.046 Read Only: No 00:13:48.046 Volatile Memory Backup: OK 00:13:48.046 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:48.046 Temperature Threshold: [2024-12-07 04:28:51.016450] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc8d0) on tqpair=0x1c9dd30 00:13:48.046 [2024-12-07 04:28:51.016558] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.046 [2024-12-07 04:28:51.016565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.046 [2024-12-07 04:28:51.016569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c9dd30) 00:13:48.046 [2024-12-07 04:28:51.016577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.046 [2024-12-07 04:28:51.016600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc8d0, cid 7, qid 0 00:13:48.046 [2024-12-07 04:28:51.020707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.046 [2024-12-07 04:28:51.020729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.046 [2024-12-07 04:28:51.020750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.046 [2024-12-07 04:28:51.020755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc8d0) on tqpair=0x1c9dd30 00:13:48.046 [2024-12-07 04:28:51.020796] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:48.046 [2024-12-07 04:28:51.020826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.046 [2024-12-07 04:28:51.020833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.047 [2024-12-07 04:28:51.020840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.047 [2024-12-07 04:28:51.020846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.047 [2024-12-07 04:28:51.020855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.020860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.020864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.020872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.020898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 
0 00:13:48.047 [2024-12-07 04:28:51.020954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.020961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.020965] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.020969] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.020978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.020982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021001] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.021030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021113] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021117] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021124] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:48.047 [2024-12-07 04:28:51.021129] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:48.047 [2024-12-07 04:28:51.021139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.021171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021254] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 
[2024-12-07 04:28:51.021282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.021394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021448] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021459] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021463] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.021506] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021590] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021594] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.021618] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021715] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.047 [2024-12-07 04:28:51.021774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.047 [2024-12-07 04:28:51.021794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.047 [2024-12-07 04:28:51.021848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.047 [2024-12-07 04:28:51.021855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.047 [2024-12-07 04:28:51.021859] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.047 [2024-12-07 04:28:51.021875] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.047 [2024-12-07 04:28:51.021879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.021883] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.021891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.021908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.021955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.021962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.021966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.021971] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.021982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.021987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.021991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.021999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 
[2024-12-07 04:28:51.022096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022120] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022206] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022303] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022317] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022329] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022333] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022517] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022552] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022624] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022630] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022638] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022763] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022779] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022892] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.022900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.022916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.022965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.022972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.048 [2024-12-07 04:28:51.022976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.048 [2024-12-07 04:28:51.022991] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.022995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.048 [2024-12-07 04:28:51.023000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.048 [2024-12-07 04:28:51.023007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.048 [2024-12-07 04:28:51.023024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.048 [2024-12-07 04:28:51.023077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.048 [2024-12-07 04:28:51.023084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023197] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023201] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023306] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023448] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023452] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:48.049 [2024-12-07 04:28:51.023499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023673] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023716] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023721] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023822] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023826] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.023927] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.023933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.023937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.023952] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023956] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.023960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.023967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.023983] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.024030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.024036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.024040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.024055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024059] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024063] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.024070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.024086] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.024136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.024142] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 [2024-12-07 04:28:51.024146] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.024161] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024166] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.024177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.049 [2024-12-07 04:28:51.024193] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.049 [2024-12-07 04:28:51.024240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.049 [2024-12-07 04:28:51.024246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.049 
[2024-12-07 04:28:51.024250] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.049 [2024-12-07 04:28:51.024265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.049 [2024-12-07 04:28:51.024273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.049 [2024-12-07 04:28:51.024280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.050 [2024-12-07 04:28:51.024296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.050 [2024-12-07 04:28:51.024351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.050 [2024-12-07 04:28:51.024357] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.050 [2024-12-07 04:28:51.024361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.050 [2024-12-07 04:28:51.024391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.050 [2024-12-07 04:28:51.024407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.050 [2024-12-07 04:28:51.024423] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.050 [2024-12-07 04:28:51.024474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.050 [2024-12-07 04:28:51.024481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.050 [2024-12-07 04:28:51.024485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.050 [2024-12-07 04:28:51.024500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.050 [2024-12-07 04:28:51.024516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.050 [2024-12-07 04:28:51.024532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.050 [2024-12-07 04:28:51.024586] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.050 [2024-12-07 04:28:51.024592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.050 [2024-12-07 04:28:51.024596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.050 [2024-12-07 04:28:51.024611] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.024620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.050 [2024-12-07 04:28:51.024627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.050 [2024-12-07 04:28:51.024643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.050 [2024-12-07 04:28:51.028697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.050 [2024-12-07 04:28:51.028717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.050 [2024-12-07 04:28:51.028738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.028743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.050 [2024-12-07 04:28:51.028757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.028762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.028766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c9dd30) 00:13:48.050 [2024-12-07 04:28:51.028775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:48.050 [2024-12-07 04:28:51.028798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cfc350, cid 3, qid 0 00:13:48.050 [2024-12-07 04:28:51.028848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:48.050 [2024-12-07 04:28:51.028855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:48.050 [2024-12-07 04:28:51.028859] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:48.050 [2024-12-07 04:28:51.028863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cfc350) on tqpair=0x1c9dd30 00:13:48.050 [2024-12-07 04:28:51.028871] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:13:48.050 0 Kelvin (-273 Celsius) 00:13:48.050 Available Spare: 0% 00:13:48.050 Available Spare Threshold: 0% 00:13:48.050 Life Percentage Used: 0% 00:13:48.050 Data Units Read: 0 00:13:48.050 Data Units Written: 0 00:13:48.050 Host Read Commands: 0 00:13:48.050 Host Write Commands: 0 00:13:48.050 Controller Busy Time: 0 minutes 00:13:48.050 Power Cycles: 0 00:13:48.050 Power On Hours: 0 hours 00:13:48.050 Unsafe Shutdowns: 0 00:13:48.050 Unrecoverable Media Errors: 0 00:13:48.050 Lifetime Error Log Entries: 0 00:13:48.050 Warning Temperature Time: 0 minutes 00:13:48.050 Critical Temperature Time: 0 minutes 00:13:48.050 00:13:48.050 Number of Queues 00:13:48.050 ================ 00:13:48.050 Number of I/O Submission Queues: 127 00:13:48.050 Number of I/O Completion Queues: 127 00:13:48.050 00:13:48.050 Active Namespaces 00:13:48.050 ================= 00:13:48.050 Namespace ID:1 00:13:48.050 Error Recovery Timeout: Unlimited 00:13:48.050 Command Set Identifier: NVM (00h) 00:13:48.050 Deallocate: Supported 00:13:48.050 Deallocated/Unwritten Error: Not Supported 00:13:48.050 
Deallocated Read Value: Unknown
00:13:48.050 Deallocate in Write Zeroes: Not Supported
00:13:48.050 Deallocated Guard Field: 0xFFFF
00:13:48.050 Flush: Supported
00:13:48.050 Reservation: Supported
00:13:48.050 Namespace Sharing Capabilities: Multiple Controllers
00:13:48.050 Size (in LBAs): 131072 (0GiB)
00:13:48.050 Capacity (in LBAs): 131072 (0GiB)
00:13:48.050 Utilization (in LBAs): 131072 (0GiB)
00:13:48.050 NGUID: ABCDEF0123456789ABCDEF0123456789
00:13:48.050 EUI64: ABCDEF0123456789
00:13:48.050 UUID: cc3f4bf4-0e03-4c06-958a-ea4db66ea795
00:13:48.050 Thin Provisioning: Not Supported
00:13:48.050 Per-NS Atomic Units: Yes
00:13:48.050 Atomic Boundary Size (Normal): 0
00:13:48.050 Atomic Boundary Size (PFail): 0
00:13:48.050 Atomic Boundary Offset: 0
00:13:48.050 Maximum Single Source Range Length: 65535
00:13:48.050 Maximum Copy Length: 65535
00:13:48.050 Maximum Source Range Count: 1
00:13:48.050 NGUID/EUI64 Never Reused: No
00:13:48.050 Namespace Write Protected: No
00:13:48.050 Number of LBA Formats: 1
00:13:48.050 Current LBA Format: LBA Format #00
00:13:48.050 LBA Format #00: Data Size: 512 Metadata Size: 0
00:13:48.050
00:13:48.050 04:28:51 -- host/identify.sh@51 -- # sync
00:13:48.050 04:28:51 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:48.050 04:28:51 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:48.050 04:28:51 -- common/autotest_common.sh@10 -- # set +x
00:13:48.050 04:28:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:48.050 04:28:51 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:13:48.050 04:28:51 -- host/identify.sh@56 -- # nvmftestfini
00:13:48.050 04:28:51 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:48.050 04:28:51 -- nvmf/common.sh@116 -- # sync
00:13:48.050 04:28:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:48.050 04:28:51 -- nvmf/common.sh@119 -- # set +e
00:13:48.050 04:28:51 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:48.050 04:28:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:48.050 rmmod nvme_tcp
00:13:48.050 rmmod nvme_fabrics
00:13:48.050 rmmod nvme_keyring
00:13:48.050 04:28:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:48.050 04:28:51 -- nvmf/common.sh@123 -- # set -e
00:13:48.050 04:28:51 -- nvmf/common.sh@124 -- # return 0
00:13:48.050 04:28:51 -- nvmf/common.sh@477 -- # '[' -n 68400 ']'
00:13:48.050 04:28:51 -- nvmf/common.sh@478 -- # killprocess 68400
00:13:48.050 04:28:51 -- common/autotest_common.sh@936 -- # '[' -z 68400 ']'
00:13:48.050 04:28:51 -- common/autotest_common.sh@940 -- # kill -0 68400
00:13:48.050 04:28:51 -- common/autotest_common.sh@941 -- # uname
00:13:48.050 04:28:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:48.050 04:28:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68400
00:13:48.050 04:28:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:48.050 killing process with pid 68400
00:13:48.050 04:28:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:48.050 04:28:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68400'
00:13:48.050 04:28:51 -- common/autotest_common.sh@955 -- # kill 68400
00:13:48.050 [2024-12-07 04:28:51.207375] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:13:48.051 04:28:51 -- common/autotest_common.sh@960 -- # wait 68400
00:13:48.310 04:28:51 --
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:48.310 04:28:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:48.310 04:28:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:48.310 04:28:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.310 04:28:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:48.310 04:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.310 04:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.310 04:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.310 04:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:48.310 ************************************ 00:13:48.310 END TEST nvmf_identify 00:13:48.310 ************************************ 00:13:48.310 00:13:48.310 real 0m2.505s 00:13:48.310 user 0m6.966s 00:13:48.310 sys 0m0.537s 00:13:48.310 04:28:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:48.310 04:28:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.310 04:28:51 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:48.310 04:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.310 04:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.310 04:28:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.310 ************************************ 00:13:48.310 START TEST nvmf_perf 00:13:48.310 ************************************ 00:13:48.310 04:28:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:48.310 * Looking for test storage... 00:13:48.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:48.571 04:28:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:48.571 04:28:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:48.571 04:28:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:48.571 04:28:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:48.571 04:28:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:48.571 04:28:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:48.571 04:28:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:48.571 04:28:51 -- scripts/common.sh@335 -- # IFS=.-: 00:13:48.571 04:28:51 -- scripts/common.sh@335 -- # read -ra ver1 00:13:48.571 04:28:51 -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.571 04:28:51 -- scripts/common.sh@336 -- # read -ra ver2 00:13:48.571 04:28:51 -- scripts/common.sh@337 -- # local 'op=<' 00:13:48.571 04:28:51 -- scripts/common.sh@339 -- # ver1_l=2 00:13:48.571 04:28:51 -- scripts/common.sh@340 -- # ver2_l=1 00:13:48.571 04:28:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:48.571 04:28:51 -- scripts/common.sh@343 -- # case "$op" in 00:13:48.571 04:28:51 -- scripts/common.sh@344 -- # : 1 00:13:48.571 04:28:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:48.571 04:28:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.571 04:28:51 -- scripts/common.sh@364 -- # decimal 1 00:13:48.571 04:28:51 -- scripts/common.sh@352 -- # local d=1 00:13:48.571 04:28:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.571 04:28:51 -- scripts/common.sh@354 -- # echo 1 00:13:48.571 04:28:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:48.571 04:28:51 -- scripts/common.sh@365 -- # decimal 2 00:13:48.571 04:28:51 -- scripts/common.sh@352 -- # local d=2 00:13:48.571 04:28:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.571 04:28:51 -- scripts/common.sh@354 -- # echo 2 00:13:48.571 04:28:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:48.571 04:28:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:48.571 04:28:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:48.571 04:28:51 -- scripts/common.sh@367 -- # return 0 00:13:48.571 04:28:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.571 04:28:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.571 --rc genhtml_branch_coverage=1 00:13:48.571 --rc genhtml_function_coverage=1 00:13:48.571 --rc genhtml_legend=1 00:13:48.571 --rc geninfo_all_blocks=1 00:13:48.571 --rc geninfo_unexecuted_blocks=1 00:13:48.571 00:13:48.571 ' 00:13:48.571 04:28:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.571 --rc genhtml_branch_coverage=1 00:13:48.571 --rc genhtml_function_coverage=1 00:13:48.571 --rc genhtml_legend=1 00:13:48.571 --rc geninfo_all_blocks=1 00:13:48.571 --rc geninfo_unexecuted_blocks=1 00:13:48.571 00:13:48.571 ' 00:13:48.571 04:28:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.571 --rc genhtml_branch_coverage=1 00:13:48.571 --rc genhtml_function_coverage=1 00:13:48.571 --rc genhtml_legend=1 00:13:48.571 --rc geninfo_all_blocks=1 00:13:48.571 --rc geninfo_unexecuted_blocks=1 00:13:48.571 00:13:48.571 ' 00:13:48.571 04:28:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.571 --rc genhtml_branch_coverage=1 00:13:48.571 --rc genhtml_function_coverage=1 00:13:48.571 --rc genhtml_legend=1 00:13:48.571 --rc geninfo_all_blocks=1 00:13:48.571 --rc geninfo_unexecuted_blocks=1 00:13:48.571 00:13:48.571 ' 00:13:48.571 04:28:51 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.571 04:28:51 -- nvmf/common.sh@7 -- # uname -s 00:13:48.571 04:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.571 04:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.571 04:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.571 04:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.571 04:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.571 04:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.571 04:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.571 04:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.571 04:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.571 04:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.571 04:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:13:48.571 
04:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:13:48.571 04:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.571 04:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.571 04:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.571 04:28:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.571 04:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.571 04:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.571 04:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.571 04:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.571 04:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.571 04:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.571 04:28:51 -- paths/export.sh@5 -- # export PATH 00:13:48.571 04:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.571 04:28:51 -- nvmf/common.sh@46 -- # : 0 00:13:48.571 04:28:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:48.571 04:28:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:48.571 04:28:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:48.571 04:28:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.571 04:28:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.571 04:28:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:13:48.571 04:28:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:48.571 04:28:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:48.571 04:28:51 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:48.571 04:28:51 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:48.571 04:28:51 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.571 04:28:51 -- host/perf.sh@17 -- # nvmftestinit 00:13:48.571 04:28:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:48.571 04:28:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.571 04:28:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:48.571 04:28:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:48.571 04:28:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:48.571 04:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.571 04:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.571 04:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.571 04:28:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:48.571 04:28:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:48.571 04:28:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:48.571 04:28:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:48.571 04:28:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:48.571 04:28:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:48.571 04:28:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.571 04:28:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.571 04:28:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.571 04:28:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:48.571 04:28:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.571 04:28:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.571 04:28:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.571 04:28:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.571 04:28:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.571 04:28:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.571 04:28:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.571 04:28:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.571 04:28:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:48.571 04:28:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:48.571 Cannot find device "nvmf_tgt_br" 00:13:48.571 04:28:51 -- nvmf/common.sh@154 -- # true 00:13:48.571 04:28:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.571 Cannot find device "nvmf_tgt_br2" 00:13:48.571 04:28:51 -- nvmf/common.sh@155 -- # true 00:13:48.571 04:28:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:48.572 04:28:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:48.572 Cannot find device "nvmf_tgt_br" 00:13:48.572 04:28:51 -- nvmf/common.sh@157 -- # true 00:13:48.572 04:28:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:48.572 Cannot find device "nvmf_tgt_br2" 00:13:48.572 04:28:51 -- nvmf/common.sh@158 -- # true 00:13:48.572 04:28:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:48.572 04:28:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:48.572 04:28:51 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.832 04:28:51 -- nvmf/common.sh@161 -- # true 00:13:48.832 04:28:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.832 04:28:51 -- nvmf/common.sh@162 -- # true 00:13:48.832 04:28:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.832 04:28:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.832 04:28:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.832 04:28:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.832 04:28:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.832 04:28:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.832 04:28:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.832 04:28:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:48.832 04:28:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:48.832 04:28:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:48.832 04:28:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:48.832 04:28:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:48.832 04:28:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:48.832 04:28:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.832 04:28:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.832 04:28:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.832 04:28:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:48.832 04:28:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:48.832 04:28:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.832 04:28:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.832 04:28:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.832 04:28:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.832 04:28:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.832 04:28:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:48.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:13:48.832 00:13:48.832 --- 10.0.0.2 ping statistics --- 00:13:48.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.832 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:48.832 04:28:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:48.832 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:48.832 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:48.832 00:13:48.832 --- 10.0.0.3 ping statistics --- 00:13:48.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.832 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:48.833 04:28:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:48.833 00:13:48.833 --- 10.0.0.1 ping statistics --- 00:13:48.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.833 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:48.833 04:28:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.833 04:28:51 -- nvmf/common.sh@421 -- # return 0 00:13:48.833 04:28:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:48.833 04:28:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.833 04:28:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:48.833 04:28:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:48.833 04:28:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.833 04:28:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:48.833 04:28:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:48.833 04:28:51 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:48.833 04:28:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:48.833 04:28:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:48.833 04:28:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.833 04:28:52 -- nvmf/common.sh@469 -- # nvmfpid=68609 00:13:48.833 04:28:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.833 04:28:52 -- nvmf/common.sh@470 -- # waitforlisten 68609 00:13:48.833 04:28:52 -- common/autotest_common.sh@829 -- # '[' -z 68609 ']' 00:13:48.833 04:28:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.833 04:28:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.833 04:28:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.833 04:28:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.833 04:28:52 -- common/autotest_common.sh@10 -- # set +x 00:13:48.833 [2024-12-07 04:28:52.049075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.833 [2024-12-07 04:28:52.049154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.093 [2024-12-07 04:28:52.182486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.093 [2024-12-07 04:28:52.239899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:49.093 [2024-12-07 04:28:52.240054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.093 [2024-12-07 04:28:52.240068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
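For reference, the test network that nvmf_veth_init builds above can be reproduced by hand with roughly the following commands (a minimal sketch distilled from the traced ip/iptables calls; the interface, bridge, and namespace names simply mirror the harness):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs inside this namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                 # bridge the two veth ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                              # initiator -> target check
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check

The second target interface (nvmf_tgt_if2 with 10.0.0.3) follows the same pattern and is omitted here for brevity.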
00:13:49.093 [2024-12-07 04:28:52.240075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.093 [2024-12-07 04:28:52.240481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.093 [2024-12-07 04:28:52.240613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.093 [2024-12-07 04:28:52.240733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.093 [2024-12-07 04:28:52.240739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.033 04:28:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.033 04:28:53 -- common/autotest_common.sh@862 -- # return 0 00:13:50.033 04:28:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:50.033 04:28:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.033 04:28:53 -- common/autotest_common.sh@10 -- # set +x 00:13:50.033 04:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.033 04:28:53 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:50.033 04:28:53 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:50.602 04:28:53 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:50.602 04:28:53 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:50.602 04:28:53 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:50.602 04:28:53 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:51.171 04:28:54 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:51.171 04:28:54 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:51.171 04:28:54 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:51.171 04:28:54 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:51.171 04:28:54 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:51.171 [2024-12-07 04:28:54.359250] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.171 04:28:54 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:51.431 04:28:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:51.431 04:28:54 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.692 04:28:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:51.692 04:28:54 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:52.041 04:28:55 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.299 [2024-12-07 04:28:55.288439] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.299 04:28:55 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:52.299 04:28:55 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:52.299 04:28:55 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:52.299 04:28:55 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:52.299 04:28:55 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:53.675 Initializing NVMe Controllers 00:13:53.675 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:53.675 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:53.675 Initialization complete. Launching workers. 00:13:53.675 ======================================================== 00:13:53.675 Latency(us) 00:13:53.675 Device Information : IOPS MiB/s Average min max 00:13:53.675 PCIE (0000:00:06.0) NSID 1 from core 0: 22642.22 88.45 1413.64 324.79 9036.61 00:13:53.675 ======================================================== 00:13:53.675 Total : 22642.22 88.45 1413.64 324.79 9036.61 00:13:53.675 00:13:53.675 04:28:56 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:55.054 Initializing NVMe Controllers 00:13:55.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:55.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:55.054 Initialization complete. Launching workers. 00:13:55.054 ======================================================== 00:13:55.054 Latency(us) 00:13:55.054 Device Information : IOPS MiB/s Average min max 00:13:55.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3465.99 13.54 288.24 99.32 4264.38 00:13:55.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.56 7871.67 11998.83 00:13:55.054 ======================================================== 00:13:55.054 Total : 3590.49 14.03 558.95 99.32 11998.83 00:13:55.054 00:13:55.054 04:28:57 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:56.432 Initializing NVMe Controllers 00:13:56.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:56.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:56.432 Initialization complete. Launching workers. 00:13:56.432 ======================================================== 00:13:56.432 Latency(us) 00:13:56.432 Device Information : IOPS MiB/s Average min max 00:13:56.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9105.76 35.57 3518.13 400.11 9018.05 00:13:56.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3960.90 15.47 8134.93 6031.01 15254.84 00:13:56.432 ======================================================== 00:13:56.432 Total : 13066.66 51.04 4917.62 400.11 15254.84 00:13:56.432 00:13:56.432 04:28:59 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:56.432 04:28:59 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:58.964 Initializing NVMe Controllers 00:13:58.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.965 Controller IO queue size 128, less than required. 
00:13:58.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.965 Controller IO queue size 128, less than required. 00:13:58.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.965 Initialization complete. Launching workers. 00:13:58.965 ======================================================== 00:13:58.965 Latency(us) 00:13:58.965 Device Information : IOPS MiB/s Average min max 00:13:58.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2025.83 506.46 65517.27 33944.59 125660.45 00:13:58.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 693.03 173.26 197624.59 104066.15 316536.56 00:13:58.965 ======================================================== 00:13:58.965 Total : 2718.86 679.72 99191.21 33944.59 316536.56 00:13:58.965 00:13:58.965 04:29:01 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:58.965 No valid NVMe controllers or AIO or URING devices found 00:13:58.965 Initializing NVMe Controllers 00:13:58.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.965 Controller IO queue size 128, less than required. 00:13:58.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.965 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:58.965 Controller IO queue size 128, less than required. 00:13:58.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.965 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:58.965 WARNING: Some requested NVMe devices were skipped 00:13:58.965 04:29:02 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:01.495 Initializing NVMe Controllers 00:14:01.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.495 Controller IO queue size 128, less than required. 00:14:01.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.495 Controller IO queue size 128, less than required. 00:14:01.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:01.495 Initialization complete. Launching workers. 
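Every remote perf run in this block follows the same invocation pattern, varying only the queue depth (-q), I/O size (-o) and runtime (-t); a minimal hand-run equivalent against the listener from the trace would be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # -q  outstanding I/Os per namespace (queue depth)
    # -o  I/O size in bytes
    # -w  workload pattern; with -M 50, randrw is a 50/50 read/write mix
    # -t  run time in seconds
    # -r  transport ID of the NVMe-oF TCP listener created earlier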
00:14:01.495 00:14:01.495 ==================== 00:14:01.495 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:01.495 TCP transport: 00:14:01.495 polls: 9577 00:14:01.495 idle_polls: 0 00:14:01.495 sock_completions: 9577 00:14:01.495 nvme_completions: 7188 00:14:01.495 submitted_requests: 10914 00:14:01.495 queued_requests: 1 00:14:01.495 00:14:01.495 ==================== 00:14:01.495 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:01.495 TCP transport: 00:14:01.495 polls: 10370 00:14:01.495 idle_polls: 0 00:14:01.495 sock_completions: 10370 00:14:01.495 nvme_completions: 6430 00:14:01.495 submitted_requests: 9760 00:14:01.495 queued_requests: 1 00:14:01.495 ======================================================== 00:14:01.495 Latency(us) 00:14:01.495 Device Information : IOPS MiB/s Average min max 00:14:01.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1859.76 464.94 69720.06 39154.31 130867.22 00:14:01.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1670.34 417.58 77588.91 35829.20 130428.27 00:14:01.495 ======================================================== 00:14:01.495 Total : 3530.10 882.53 73443.36 35829.20 130867.22 00:14:01.495 00:14:01.495 04:29:04 -- host/perf.sh@66 -- # sync 00:14:01.495 04:29:04 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.754 04:29:04 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:01.754 04:29:04 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:01.754 04:29:04 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:02.011 04:29:05 -- host/perf.sh@72 -- # ls_guid=e469ee45-d59a-4243-ad67-ea2e60d9e5bc 00:14:02.011 04:29:05 -- host/perf.sh@73 -- # get_lvs_free_mb e469ee45-d59a-4243-ad67-ea2e60d9e5bc 00:14:02.011 04:29:05 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e469ee45-d59a-4243-ad67-ea2e60d9e5bc 00:14:02.011 04:29:05 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:02.011 04:29:05 -- common/autotest_common.sh@1355 -- # local fc 00:14:02.011 04:29:05 -- common/autotest_common.sh@1356 -- # local cs 00:14:02.011 04:29:05 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:02.577 04:29:05 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:02.577 { 00:14:02.577 "uuid": "e469ee45-d59a-4243-ad67-ea2e60d9e5bc", 00:14:02.577 "name": "lvs_0", 00:14:02.577 "base_bdev": "Nvme0n1", 00:14:02.577 "total_data_clusters": 1278, 00:14:02.577 "free_clusters": 1278, 00:14:02.577 "block_size": 4096, 00:14:02.577 "cluster_size": 4194304 00:14:02.577 } 00:14:02.577 ]' 00:14:02.577 04:29:05 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e469ee45-d59a-4243-ad67-ea2e60d9e5bc") .free_clusters' 00:14:02.577 04:29:05 -- common/autotest_common.sh@1358 -- # fc=1278 00:14:02.577 04:29:05 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e469ee45-d59a-4243-ad67-ea2e60d9e5bc") .cluster_size' 00:14:02.577 5112 00:14:02.577 04:29:05 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:02.577 04:29:05 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:14:02.577 04:29:05 -- common/autotest_common.sh@1363 -- # echo 5112 00:14:02.577 04:29:05 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:02.577 04:29:05 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u e469ee45-d59a-4243-ad67-ea2e60d9e5bc lbd_0 5112 00:14:02.834 04:29:05 -- host/perf.sh@80 -- # lb_guid=d273cf7d-dfbf-4e11-84be-b6a6244b6ce6 00:14:02.834 04:29:05 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore d273cf7d-dfbf-4e11-84be-b6a6244b6ce6 lvs_n_0 00:14:03.091 04:29:06 -- host/perf.sh@83 -- # ls_nested_guid=bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5 00:14:03.091 04:29:06 -- host/perf.sh@84 -- # get_lvs_free_mb bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5 00:14:03.091 04:29:06 -- common/autotest_common.sh@1353 -- # local lvs_uuid=bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5 00:14:03.091 04:29:06 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:03.091 04:29:06 -- common/autotest_common.sh@1355 -- # local fc 00:14:03.091 04:29:06 -- common/autotest_common.sh@1356 -- # local cs 00:14:03.091 04:29:06 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:03.349 04:29:06 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:03.349 { 00:14:03.349 "uuid": "e469ee45-d59a-4243-ad67-ea2e60d9e5bc", 00:14:03.349 "name": "lvs_0", 00:14:03.349 "base_bdev": "Nvme0n1", 00:14:03.349 "total_data_clusters": 1278, 00:14:03.349 "free_clusters": 0, 00:14:03.349 "block_size": 4096, 00:14:03.349 "cluster_size": 4194304 00:14:03.349 }, 00:14:03.349 { 00:14:03.349 "uuid": "bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5", 00:14:03.349 "name": "lvs_n_0", 00:14:03.349 "base_bdev": "d273cf7d-dfbf-4e11-84be-b6a6244b6ce6", 00:14:03.349 "total_data_clusters": 1276, 00:14:03.349 "free_clusters": 1276, 00:14:03.349 "block_size": 4096, 00:14:03.349 "cluster_size": 4194304 00:14:03.349 } 00:14:03.349 ]' 00:14:03.349 04:29:06 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5") .free_clusters' 00:14:03.349 04:29:06 -- common/autotest_common.sh@1358 -- # fc=1276 00:14:03.349 04:29:06 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5") .cluster_size' 00:14:03.349 5104 00:14:03.349 04:29:06 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:03.349 04:29:06 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:14:03.349 04:29:06 -- common/autotest_common.sh@1363 -- # echo 5104 00:14:03.349 04:29:06 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:03.349 04:29:06 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bcb3ecef-f958-4ac7-a3fa-9e518fb6e9d5 lbd_nest_0 5104 00:14:03.607 04:29:06 -- host/perf.sh@88 -- # lb_nested_guid=b8c9ad1a-68b2-472d-9fa2-875a314db71b 00:14:03.607 04:29:06 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.173 04:29:07 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:04.173 04:29:07 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b8c9ad1a-68b2-472d-9fa2-875a314db71b 00:14:04.173 04:29:07 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.431 04:29:07 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:04.431 04:29:07 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:04.431 04:29:07 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:04.431 04:29:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:04.431 04:29:07 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:04.997 No valid NVMe controllers or AIO or URING devices found 00:14:04.997 Initializing NVMe Controllers 00:14:04.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:04.997 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:04.997 WARNING: Some requested NVMe devices were skipped 00:14:04.997 04:29:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:04.997 04:29:07 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:14.975 Initializing NVMe Controllers 00:14:14.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:14.976 Initialization complete. Launching workers. 00:14:14.976 ======================================================== 00:14:14.976 Latency(us) 00:14:14.976 Device Information : IOPS MiB/s Average min max 00:14:14.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.10 121.26 1030.37 303.12 8536.90 00:14:14.976 ======================================================== 00:14:14.976 Total : 970.10 121.26 1030.37 303.12 8536.90 00:14:14.976 00:14:14.976 04:29:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:14.976 04:29:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:14.976 04:29:18 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:15.543 No valid NVMe controllers or AIO or URING devices found 00:14:15.543 Initializing NVMe Controllers 00:14:15.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.543 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:15.543 WARNING: Some requested NVMe devices were skipped 00:14:15.543 04:29:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:15.543 04:29:18 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:27.747 Initializing NVMe Controllers 00:14:27.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:27.747 Initialization complete. Launching workers. 
00:14:27.747 ======================================================== 00:14:27.747 Latency(us) 00:14:27.747 Device Information : IOPS MiB/s Average min max 00:14:27.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1332.41 166.55 24039.60 5505.39 59981.44 00:14:27.748 ======================================================== 00:14:27.748 Total : 1332.41 166.55 24039.60 5505.39 59981.44 00:14:27.748 00:14:27.748 04:29:28 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:27.748 04:29:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:27.748 04:29:28 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:27.748 No valid NVMe controllers or AIO or URING devices found 00:14:27.748 Initializing NVMe Controllers 00:14:27.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.748 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:27.748 WARNING: Some requested NVMe devices were skipped 00:14:27.748 04:29:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:27.748 04:29:29 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:37.724 Initializing NVMe Controllers 00:14:37.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.724 Controller IO queue size 128, less than required. 00:14:37.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:37.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:37.724 Initialization complete. Launching workers. 
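The runs above, and the one whose results follow below, come from a small sweep in host/perf.sh; the xtrace at perf.sh@95-@99 corresponds to a loop of roughly this shape (reconstructed from the trace, so treat it as a sketch rather than the script verbatim):

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            # one run per (queue depth, I/O size) pair against the lvol-backed subsystem
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
                -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done

The 512-byte cases report "No valid NVMe controllers" because namespace 1 is the 4096-byte-block lvol bdev, so a 512-byte I/O size is rejected and the namespace is dropped from the test, as the warnings above show.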
00:14:37.724 ======================================================== 00:14:37.724 Latency(us) 00:14:37.724 Device Information : IOPS MiB/s Average min max 00:14:37.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4089.80 511.22 31302.29 12262.53 60659.88 00:14:37.724 ======================================================== 00:14:37.724 Total : 4089.80 511.22 31302.29 12262.53 60659.88 00:14:37.724 00:14:37.724 04:29:39 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.724 04:29:39 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b8c9ad1a-68b2-472d-9fa2-875a314db71b 00:14:37.724 04:29:40 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:37.724 04:29:40 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d273cf7d-dfbf-4e11-84be-b6a6244b6ce6 00:14:37.724 04:29:40 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:37.724 04:29:40 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:37.724 04:29:40 -- host/perf.sh@114 -- # nvmftestfini 00:14:37.724 04:29:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:37.724 04:29:40 -- nvmf/common.sh@116 -- # sync 00:14:37.724 04:29:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:37.724 04:29:40 -- nvmf/common.sh@119 -- # set +e 00:14:37.724 04:29:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:37.724 04:29:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:37.724 rmmod nvme_tcp 00:14:37.724 rmmod nvme_fabrics 00:14:37.724 rmmod nvme_keyring 00:14:37.724 04:29:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:37.724 04:29:40 -- nvmf/common.sh@123 -- # set -e 00:14:37.724 04:29:40 -- nvmf/common.sh@124 -- # return 0 00:14:37.724 04:29:40 -- nvmf/common.sh@477 -- # '[' -n 68609 ']' 00:14:37.724 04:29:40 -- nvmf/common.sh@478 -- # killprocess 68609 00:14:37.724 04:29:40 -- common/autotest_common.sh@936 -- # '[' -z 68609 ']' 00:14:37.724 04:29:40 -- common/autotest_common.sh@940 -- # kill -0 68609 00:14:37.724 04:29:40 -- common/autotest_common.sh@941 -- # uname 00:14:37.984 04:29:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.984 04:29:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68609 00:14:37.984 killing process with pid 68609 00:14:37.984 04:29:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:37.984 04:29:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:37.984 04:29:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68609' 00:14:37.984 04:29:40 -- common/autotest_common.sh@955 -- # kill 68609 00:14:37.984 04:29:40 -- common/autotest_common.sh@960 -- # wait 68609 00:14:39.368 04:29:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:39.368 04:29:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:39.368 04:29:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:39.368 04:29:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.368 04:29:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:39.368 04:29:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.368 04:29:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.368 04:29:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.368 04:29:42 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:14:39.368 00:14:39.368 real 0m50.848s 00:14:39.368 user 3m12.221s 00:14:39.368 sys 0m12.748s 00:14:39.368 04:29:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:39.368 04:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.368 ************************************ 00:14:39.368 END TEST nvmf_perf 00:14:39.368 ************************************ 00:14:39.368 04:29:42 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:39.368 04:29:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:39.369 04:29:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.369 04:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.369 ************************************ 00:14:39.369 START TEST nvmf_fio_host 00:14:39.369 ************************************ 00:14:39.369 04:29:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:39.369 * Looking for test storage... 00:14:39.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:39.369 04:29:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:39.369 04:29:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:39.369 04:29:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:39.369 04:29:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:39.369 04:29:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:39.369 04:29:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:39.369 04:29:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:39.369 04:29:42 -- scripts/common.sh@335 -- # IFS=.-: 00:14:39.369 04:29:42 -- scripts/common.sh@335 -- # read -ra ver1 00:14:39.369 04:29:42 -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.369 04:29:42 -- scripts/common.sh@336 -- # read -ra ver2 00:14:39.369 04:29:42 -- scripts/common.sh@337 -- # local 'op=<' 00:14:39.369 04:29:42 -- scripts/common.sh@339 -- # ver1_l=2 00:14:39.369 04:29:42 -- scripts/common.sh@340 -- # ver2_l=1 00:14:39.369 04:29:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:39.369 04:29:42 -- scripts/common.sh@343 -- # case "$op" in 00:14:39.369 04:29:42 -- scripts/common.sh@344 -- # : 1 00:14:39.369 04:29:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:39.369 04:29:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:39.369 04:29:42 -- scripts/common.sh@364 -- # decimal 1 00:14:39.369 04:29:42 -- scripts/common.sh@352 -- # local d=1 00:14:39.369 04:29:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.369 04:29:42 -- scripts/common.sh@354 -- # echo 1 00:14:39.369 04:29:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:39.369 04:29:42 -- scripts/common.sh@365 -- # decimal 2 00:14:39.369 04:29:42 -- scripts/common.sh@352 -- # local d=2 00:14:39.369 04:29:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.369 04:29:42 -- scripts/common.sh@354 -- # echo 2 00:14:39.369 04:29:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:39.369 04:29:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:39.369 04:29:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:39.369 04:29:42 -- scripts/common.sh@367 -- # return 0 00:14:39.369 04:29:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.369 04:29:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.369 --rc genhtml_branch_coverage=1 00:14:39.369 --rc genhtml_function_coverage=1 00:14:39.369 --rc genhtml_legend=1 00:14:39.369 --rc geninfo_all_blocks=1 00:14:39.369 --rc geninfo_unexecuted_blocks=1 00:14:39.369 00:14:39.369 ' 00:14:39.369 04:29:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.369 --rc genhtml_branch_coverage=1 00:14:39.369 --rc genhtml_function_coverage=1 00:14:39.369 --rc genhtml_legend=1 00:14:39.369 --rc geninfo_all_blocks=1 00:14:39.369 --rc geninfo_unexecuted_blocks=1 00:14:39.369 00:14:39.369 ' 00:14:39.369 04:29:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.369 --rc genhtml_branch_coverage=1 00:14:39.369 --rc genhtml_function_coverage=1 00:14:39.369 --rc genhtml_legend=1 00:14:39.369 --rc geninfo_all_blocks=1 00:14:39.369 --rc geninfo_unexecuted_blocks=1 00:14:39.369 00:14:39.369 ' 00:14:39.369 04:29:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:39.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.369 --rc genhtml_branch_coverage=1 00:14:39.369 --rc genhtml_function_coverage=1 00:14:39.369 --rc genhtml_legend=1 00:14:39.369 --rc geninfo_all_blocks=1 00:14:39.369 --rc geninfo_unexecuted_blocks=1 00:14:39.369 00:14:39.369 ' 00:14:39.369 04:29:42 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.369 04:29:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.369 04:29:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.369 04:29:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.369 04:29:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- paths/export.sh@5 -- # export PATH 00:14:39.369 04:29:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.369 04:29:42 -- nvmf/common.sh@7 -- # uname -s 00:14:39.369 04:29:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.369 04:29:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.369 04:29:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.369 04:29:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.369 04:29:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.369 04:29:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.369 04:29:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.369 04:29:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.369 04:29:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.369 04:29:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.369 04:29:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:14:39.369 04:29:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:14:39.369 04:29:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.369 04:29:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.369 04:29:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.369 04:29:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.369 04:29:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.369 04:29:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.369 04:29:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.369 04:29:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- paths/export.sh@5 -- # export PATH 00:14:39.369 04:29:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.369 04:29:42 -- nvmf/common.sh@46 -- # : 0 00:14:39.369 04:29:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:39.369 04:29:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:39.369 04:29:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:39.369 04:29:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.369 04:29:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.369 04:29:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:39.369 04:29:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:39.369 04:29:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:39.369 04:29:42 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.369 04:29:42 -- host/fio.sh@14 -- # nvmftestinit 00:14:39.369 04:29:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:39.369 04:29:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.369 04:29:42 -- nvmf/common.sh@436 -- # prepare_net_devs 
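The nvmf_veth_init sequence that follows rebuilds the same test topology used earlier by perf.sh: a network namespace for the target, veth pairs for initiator and target, and a bridge joining the host-side ends. Condensed into a standalone sketch (interface names and 10.0.0.0/24 addresses exactly as in the trace; the second target interface and its bridge port are handled the same way):

    ip netns add nvmf_tgt_ns_spdk                              # target runs inside its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties the host-side veth ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                         # sanity check: initiator reaches the target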
00:14:39.369 04:29:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:39.369 04:29:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:39.369 04:29:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.369 04:29:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.369 04:29:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.369 04:29:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:39.370 04:29:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:39.370 04:29:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:39.370 04:29:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:39.370 04:29:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:39.370 04:29:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:39.370 04:29:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.370 04:29:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.370 04:29:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.370 04:29:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:39.370 04:29:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.370 04:29:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.370 04:29:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.370 04:29:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.370 04:29:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.370 04:29:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.370 04:29:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.370 04:29:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.370 04:29:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:39.370 04:29:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:39.628 Cannot find device "nvmf_tgt_br" 00:14:39.628 04:29:42 -- nvmf/common.sh@154 -- # true 00:14:39.628 04:29:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.628 Cannot find device "nvmf_tgt_br2" 00:14:39.628 04:29:42 -- nvmf/common.sh@155 -- # true 00:14:39.628 04:29:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:39.628 04:29:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:39.628 Cannot find device "nvmf_tgt_br" 00:14:39.628 04:29:42 -- nvmf/common.sh@157 -- # true 00:14:39.628 04:29:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:39.628 Cannot find device "nvmf_tgt_br2" 00:14:39.628 04:29:42 -- nvmf/common.sh@158 -- # true 00:14:39.628 04:29:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:39.628 04:29:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:39.628 04:29:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.628 04:29:42 -- nvmf/common.sh@161 -- # true 00:14:39.628 04:29:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.628 04:29:42 -- nvmf/common.sh@162 -- # true 00:14:39.628 04:29:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.628 04:29:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.628 04:29:42 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.628 04:29:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.628 04:29:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.628 04:29:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.628 04:29:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.628 04:29:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.628 04:29:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.628 04:29:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:39.628 04:29:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:39.628 04:29:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:39.628 04:29:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:39.628 04:29:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.628 04:29:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.628 04:29:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.628 04:29:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:39.628 04:29:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:39.628 04:29:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:39.628 04:29:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:39.628 04:29:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:39.887 04:29:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:39.887 04:29:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:39.887 04:29:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:39.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:39.887 00:14:39.887 --- 10.0.0.2 ping statistics --- 00:14:39.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.887 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:39.887 04:29:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:39.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:39.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:39.887 00:14:39.887 --- 10.0.0.3 ping statistics --- 00:14:39.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.887 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:39.887 04:29:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:39.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:39.887 00:14:39.887 --- 10.0.0.1 ping statistics --- 00:14:39.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.887 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:39.887 04:29:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.887 04:29:42 -- nvmf/common.sh@421 -- # return 0 00:14:39.887 04:29:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:39.887 04:29:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.887 04:29:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:39.887 04:29:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:39.887 04:29:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.887 04:29:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:39.887 04:29:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:39.887 04:29:42 -- host/fio.sh@16 -- # [[ y != y ]] 00:14:39.887 04:29:42 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:39.887 04:29:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.887 04:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.887 04:29:42 -- host/fio.sh@24 -- # nvmfpid=69434 00:14:39.887 04:29:42 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:39.887 04:29:42 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.887 04:29:42 -- host/fio.sh@28 -- # waitforlisten 69434 00:14:39.887 04:29:42 -- common/autotest_common.sh@829 -- # '[' -z 69434 ']' 00:14:39.887 04:29:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.887 04:29:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.887 04:29:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.887 04:29:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.887 04:29:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.887 [2024-12-07 04:29:42.982888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:39.887 [2024-12-07 04:29:42.982995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.887 [2024-12-07 04:29:43.124732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.145 [2024-12-07 04:29:43.194744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:40.145 [2024-12-07 04:29:43.195169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.145 [2024-12-07 04:29:43.195308] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.145 [2024-12-07 04:29:43.195440] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
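The fio host test starts its own target the same way the perf test did: nvmf_tgt is launched inside the namespace and the harness waits on the RPC socket before configuring the transport. Reduced to its essentials (the waitforlisten helper is paraphrased here as a simple poll of the RPC socket, not the exact library code):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the target answers on /var/tmp/spdk.sock before sending configuration RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192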
00:14:40.145 [2024-12-07 04:29:43.195684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.145 [2024-12-07 04:29:43.195858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.145 [2024-12-07 04:29:43.196064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.145 [2024-12-07 04:29:43.196078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.709 04:29:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.709 04:29:43 -- common/autotest_common.sh@862 -- # return 0 00:14:40.709 04:29:43 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:40.965 [2024-12-07 04:29:44.171197] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.222 04:29:44 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:41.222 04:29:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.222 04:29:44 -- common/autotest_common.sh@10 -- # set +x 00:14:41.222 04:29:44 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:41.479 Malloc1 00:14:41.479 04:29:44 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:41.737 04:29:44 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.995 04:29:45 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.252 [2024-12-07 04:29:45.255079] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.252 04:29:45 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.510 04:29:45 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:42.510 04:29:45 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:42.510 04:29:45 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:42.510 04:29:45 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:42.510 04:29:45 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:42.510 04:29:45 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:42.510 04:29:45 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:42.510 04:29:45 -- common/autotest_common.sh@1330 -- # shift 00:14:42.510 04:29:45 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:42.510 04:29:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:42.510 04:29:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:42.510 04:29:45 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:42.510 04:29:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:42.510 04:29:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:42.510 04:29:45 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:42.510 04:29:45 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:42.510 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:42.510 fio-3.35 00:14:42.510 Starting 1 thread 00:14:45.039 00:14:45.039 test: (groupid=0, jobs=1): err= 0: pid=69517: Sat Dec 7 04:29:47 2024 00:14:45.039 read: IOPS=9420, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2005msec) 00:14:45.039 slat (nsec): min=1877, max=345246, avg=2579.96, stdev=3335.96 00:14:45.039 clat (usec): min=2307, max=12345, avg=7058.39, stdev=590.62 00:14:45.039 lat (usec): min=2358, max=12347, avg=7060.97, stdev=590.42 00:14:45.039 clat percentiles (usec): 00:14:45.039 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:14:45.039 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:14:45.039 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:14:45.039 | 99.00th=[ 8356], 99.50th=[ 9765], 99.90th=[11731], 99.95th=[11994], 00:14:45.039 | 99.99th=[12387] 00:14:45.039 bw ( KiB/s): min=36680, max=38400, per=99.87%, avg=37636.00, stdev=758.51, samples=4 00:14:45.039 iops : min= 9170, max= 9600, avg=9409.00, stdev=189.63, samples=4 00:14:45.039 write: IOPS=9420, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2005msec); 0 zone resets 00:14:45.039 slat (nsec): min=1960, max=193447, avg=2671.92, stdev=2023.41 00:14:45.039 clat (usec): min=2174, max=11675, avg=6477.92, stdev=541.46 00:14:45.039 lat (usec): min=2186, max=11677, avg=6480.59, stdev=541.42 00:14:45.039 clat percentiles (usec): 00:14:45.039 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:14:45.039 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:14:45.039 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:14:45.039 | 99.00th=[ 7635], 99.50th=[ 8291], 99.90th=[11076], 99.95th=[11469], 00:14:45.039 | 99.99th=[11469] 00:14:45.039 bw ( KiB/s): min=37488, max=38064, per=99.95%, avg=37666.00, stdev=268.24, samples=4 00:14:45.039 iops : min= 9372, max= 9516, avg=9416.50, stdev=67.06, samples=4 00:14:45.039 lat (msec) : 4=0.12%, 10=99.45%, 20=0.43% 00:14:45.039 cpu : usr=67.66%, sys=23.60%, ctx=25, majf=0, minf=5 00:14:45.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:45.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:45.039 issued rwts: total=18889,18889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:45.039 00:14:45.039 Run status group 0 (all jobs): 00:14:45.039 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), 
run=2005-2005msec 00:14:45.039 WRITE: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2005-2005msec 00:14:45.039 04:29:47 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:45.039 04:29:47 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:45.039 04:29:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:45.039 04:29:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:45.039 04:29:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:45.039 04:29:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:45.039 04:29:47 -- common/autotest_common.sh@1330 -- # shift 00:14:45.039 04:29:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:45.039 04:29:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:45.039 04:29:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:45.039 04:29:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:45.039 04:29:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:45.039 04:29:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:45.039 04:29:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:45.039 04:29:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:45.039 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:45.039 fio-3.35 00:14:45.039 Starting 1 thread 00:14:47.574 00:14:47.574 test: (groupid=0, jobs=1): err= 0: pid=69566: Sat Dec 7 04:29:50 2024 00:14:47.574 read: IOPS=8675, BW=136MiB/s (142MB/s)(272MiB/2007msec) 00:14:47.574 slat (usec): min=2, max=164, avg= 3.88, stdev= 2.73 00:14:47.574 clat (usec): min=1981, max=17371, avg=8079.29, stdev=2633.54 00:14:47.574 lat (usec): min=1985, max=17374, avg=8083.16, stdev=2633.67 00:14:47.574 clat percentiles (usec): 00:14:47.574 | 1.00th=[ 3851], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5800], 00:14:47.574 | 30.00th=[ 6390], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8356], 00:14:47.574 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[11600], 95.00th=[13566], 00:14:47.574 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16909], 99.95th=[16909], 00:14:47.574 | 99.99th=[17171] 00:14:47.574 bw ( KiB/s): min=62432, max=75113, per=49.93%, avg=69314.25, stdev=5929.19, samples=4 00:14:47.574 iops : 
min= 3902, max= 4694, avg=4332.00, stdev=370.39, samples=4 00:14:47.574 write: IOPS=4951, BW=77.4MiB/s (81.1MB/s)(141MiB/1823msec); 0 zone resets 00:14:47.574 slat (usec): min=32, max=370, avg=39.17, stdev= 9.72 00:14:47.574 clat (usec): min=2652, max=21335, avg=11830.82, stdev=1989.52 00:14:47.574 lat (usec): min=2686, max=21370, avg=11869.99, stdev=1990.74 00:14:47.574 clat percentiles (usec): 00:14:47.574 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:14:47.574 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:14:47.574 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14484], 95.00th=[15401], 00:14:47.574 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:14:47.574 | 99.99th=[21365] 00:14:47.574 bw ( KiB/s): min=65312, max=77892, per=91.10%, avg=72177.00, stdev=5941.72, samples=4 00:14:47.574 iops : min= 4082, max= 4868, avg=4511.00, stdev=371.28, samples=4 00:14:47.574 lat (msec) : 2=0.01%, 4=1.10%, 10=56.62%, 20=42.27%, 50=0.01% 00:14:47.574 cpu : usr=78.32%, sys=16.15%, ctx=3, majf=0, minf=1 00:14:47.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:47.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:47.574 issued rwts: total=17412,9027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:47.574 00:14:47.574 Run status group 0 (all jobs): 00:14:47.574 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=272MiB (285MB), run=2007-2007msec 00:14:47.574 WRITE: bw=77.4MiB/s (81.1MB/s), 77.4MiB/s-77.4MiB/s (81.1MB/s-81.1MB/s), io=141MiB (148MB), run=1823-1823msec 00:14:47.574 04:29:50 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.574 04:29:50 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:14:47.574 04:29:50 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:14:47.574 04:29:50 -- host/fio.sh@51 -- # get_nvme_bdfs 00:14:47.574 04:29:50 -- common/autotest_common.sh@1508 -- # bdfs=() 00:14:47.574 04:29:50 -- common/autotest_common.sh@1508 -- # local bdfs 00:14:47.574 04:29:50 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:47.574 04:29:50 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:47.574 04:29:50 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:14:47.834 04:29:50 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:14:47.834 04:29:50 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:14:47.834 04:29:50 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:14:48.093 Nvme0n1 00:14:48.093 04:29:51 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:14:48.352 04:29:51 -- host/fio.sh@53 -- # ls_guid=eecce27a-bb69-499a-8d9a-b806ee47c93f 00:14:48.352 04:29:51 -- host/fio.sh@54 -- # get_lvs_free_mb eecce27a-bb69-499a-8d9a-b806ee47c93f 00:14:48.352 04:29:51 -- common/autotest_common.sh@1353 -- # local lvs_uuid=eecce27a-bb69-499a-8d9a-b806ee47c93f 00:14:48.352 04:29:51 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:48.352 04:29:51 -- common/autotest_common.sh@1355 -- # local fc 00:14:48.352 
04:29:51 -- common/autotest_common.sh@1356 -- # local cs 00:14:48.352 04:29:51 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:48.627 04:29:51 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:48.627 { 00:14:48.627 "uuid": "eecce27a-bb69-499a-8d9a-b806ee47c93f", 00:14:48.627 "name": "lvs_0", 00:14:48.627 "base_bdev": "Nvme0n1", 00:14:48.627 "total_data_clusters": 4, 00:14:48.627 "free_clusters": 4, 00:14:48.627 "block_size": 4096, 00:14:48.627 "cluster_size": 1073741824 00:14:48.627 } 00:14:48.627 ]' 00:14:48.627 04:29:51 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="eecce27a-bb69-499a-8d9a-b806ee47c93f") .free_clusters' 00:14:48.627 04:29:51 -- common/autotest_common.sh@1358 -- # fc=4 00:14:48.627 04:29:51 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="eecce27a-bb69-499a-8d9a-b806ee47c93f") .cluster_size' 00:14:48.627 04:29:51 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:14:48.627 04:29:51 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:14:48.627 4096 00:14:48.627 04:29:51 -- common/autotest_common.sh@1363 -- # echo 4096 00:14:48.627 04:29:51 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:48.887 66039700-73ac-43b3-a052-88825eb3b7cf 00:14:48.887 04:29:52 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:49.146 04:29:52 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:49.406 04:29:52 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:49.665 04:29:52 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.665 04:29:52 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.665 04:29:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:49.665 04:29:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.665 04:29:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:49.665 04:29:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.665 04:29:52 -- common/autotest_common.sh@1330 -- # shift 00:14:49.665 04:29:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:49.665 04:29:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:49.665 04:29:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:49.665 04:29:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.665 04:29:52 
-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:49.665 04:29:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:49.665 04:29:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:49.665 04:29:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:49.665 04:29:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.923 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:49.923 fio-3.35 00:14:49.923 Starting 1 thread 00:14:52.483 00:14:52.483 test: (groupid=0, jobs=1): err= 0: pid=69670: Sat Dec 7 04:29:55 2024 00:14:52.483 read: IOPS=6514, BW=25.4MiB/s (26.7MB/s)(51.1MiB/2008msec) 00:14:52.483 slat (usec): min=2, max=238, avg= 2.73, stdev= 2.99 00:14:52.483 clat (usec): min=2769, max=18583, avg=10256.88, stdev=870.40 00:14:52.483 lat (usec): min=2776, max=18585, avg=10259.60, stdev=870.18 00:14:52.483 clat percentiles (usec): 00:14:52.483 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:14:52.483 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:14:52.483 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11600], 00:14:52.483 | 99.00th=[12125], 99.50th=[12387], 99.90th=[17171], 99.95th=[17695], 00:14:52.483 | 99.99th=[18482] 00:14:52.483 bw ( KiB/s): min=25272, max=26624, per=99.89%, avg=26032.00, stdev=606.49, samples=4 00:14:52.483 iops : min= 6318, max= 6656, avg=6508.00, stdev=151.62, samples=4 00:14:52.483 write: IOPS=6524, BW=25.5MiB/s (26.7MB/s)(51.2MiB/2008msec); 0 zone resets 00:14:52.483 slat (usec): min=2, max=173, avg= 2.84, stdev= 2.19 00:14:52.483 clat (usec): min=1788, max=17527, avg=9313.26, stdev=804.80 00:14:52.483 lat (usec): min=1799, max=17529, avg=9316.10, stdev=804.69 00:14:52.483 clat percentiles (usec): 00:14:52.483 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:14:52.483 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:14:52.483 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:14:52.483 | 99.00th=[11076], 99.50th=[11338], 99.90th=[15139], 99.95th=[16581], 00:14:52.483 | 99.99th=[17433] 00:14:52.483 bw ( KiB/s): min=25856, max=26376, per=99.93%, avg=26082.00, stdev=231.99, samples=4 00:14:52.483 iops : min= 6464, max= 6594, avg=6520.50, stdev=58.00, samples=4 00:14:52.483 lat (msec) : 2=0.01%, 4=0.08%, 10=60.34%, 20=39.57% 00:14:52.484 cpu : usr=71.05%, sys=22.82%, ctx=3, majf=0, minf=14 00:14:52.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:52.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:52.484 issued rwts: total=13082,13102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:52.484 00:14:52.484 Run status group 0 (all jobs): 00:14:52.484 READ: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=51.1MiB (53.6MB), run=2008-2008msec 00:14:52.484 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.2MiB (53.7MB), run=2008-2008msec 00:14:52.484 04:29:55 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:52.484 04:29:55 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:52.742 04:29:55 -- host/fio.sh@64 -- # ls_nested_guid=36a490a3-4049-4686-83c9-0cb44fffbc22 00:14:52.742 04:29:55 -- host/fio.sh@65 -- # get_lvs_free_mb 36a490a3-4049-4686-83c9-0cb44fffbc22 00:14:52.742 04:29:55 -- common/autotest_common.sh@1353 -- # local lvs_uuid=36a490a3-4049-4686-83c9-0cb44fffbc22 00:14:52.742 04:29:55 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:52.742 04:29:55 -- common/autotest_common.sh@1355 -- # local fc 00:14:52.742 04:29:55 -- common/autotest_common.sh@1356 -- # local cs 00:14:52.742 04:29:55 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:53.001 04:29:56 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:53.001 { 00:14:53.001 "uuid": "eecce27a-bb69-499a-8d9a-b806ee47c93f", 00:14:53.001 "name": "lvs_0", 00:14:53.001 "base_bdev": "Nvme0n1", 00:14:53.001 "total_data_clusters": 4, 00:14:53.001 "free_clusters": 0, 00:14:53.001 "block_size": 4096, 00:14:53.001 "cluster_size": 1073741824 00:14:53.001 }, 00:14:53.001 { 00:14:53.001 "uuid": "36a490a3-4049-4686-83c9-0cb44fffbc22", 00:14:53.001 "name": "lvs_n_0", 00:14:53.001 "base_bdev": "66039700-73ac-43b3-a052-88825eb3b7cf", 00:14:53.001 "total_data_clusters": 1022, 00:14:53.001 "free_clusters": 1022, 00:14:53.001 "block_size": 4096, 00:14:53.001 "cluster_size": 4194304 00:14:53.001 } 00:14:53.001 ]' 00:14:53.001 04:29:56 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="36a490a3-4049-4686-83c9-0cb44fffbc22") .free_clusters' 00:14:53.001 04:29:56 -- common/autotest_common.sh@1358 -- # fc=1022 00:14:53.001 04:29:56 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="36a490a3-4049-4686-83c9-0cb44fffbc22") .cluster_size' 00:14:53.001 04:29:56 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:53.001 04:29:56 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:14:53.001 4088 00:14:53.001 04:29:56 -- common/autotest_common.sh@1363 -- # echo 4088 00:14:53.001 04:29:56 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:14:53.259 be354671-6a8f-4499-a3a1-0fc346f33d4e 00:14:53.259 04:29:56 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:14:53.516 04:29:56 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:14:53.773 04:29:56 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:54.030 04:29:57 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:54.030 04:29:57 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:54.030 04:29:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:54.030 04:29:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.030 
04:29:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:54.030 04:29:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.030 04:29:57 -- common/autotest_common.sh@1330 -- # shift 00:14:54.030 04:29:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:54.030 04:29:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:54.031 04:29:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:54.031 04:29:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:54.031 04:29:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:54.031 04:29:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:54.031 04:29:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:54.031 04:29:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:54.288 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:54.288 fio-3.35 00:14:54.288 Starting 1 thread 00:14:56.820 00:14:56.820 test: (groupid=0, jobs=1): err= 0: pid=69754: Sat Dec 7 04:29:59 2024 00:14:56.820 read: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec) 00:14:56.820 slat (nsec): min=1966, max=162314, avg=2678.83, stdev=2594.40 00:14:56.820 clat (usec): min=3002, max=20269, avg=11445.94, stdev=964.78 00:14:56.820 lat (usec): min=3006, max=20272, avg=11448.62, stdev=964.63 00:14:56.820 clat percentiles (usec): 00:14:56.820 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:14:56.820 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:14:56.820 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:14:56.820 | 99.00th=[13566], 99.50th=[14091], 99.90th=[17695], 99.95th=[19006], 00:14:56.820 | 99.99th=[20317] 00:14:56.820 bw ( KiB/s): min=22864, max=23728, per=99.93%, avg=23424.00, stdev=396.46, samples=4 00:14:56.820 iops : min= 5716, max= 5932, avg=5856.00, stdev=99.12, samples=4 00:14:56.820 write: IOPS=5851, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec); 0 zone resets 00:14:56.820 slat (usec): min=2, max=140, avg= 2.78, stdev= 2.08 00:14:56.820 clat (usec): min=1876, max=19303, avg=10371.62, stdev=916.13 00:14:56.820 lat (usec): min=1882, max=19305, avg=10374.40, stdev=916.09 00:14:56.820 clat percentiles (usec): 00:14:56.820 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:14:56.820 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:14:56.820 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:14:56.820 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17433], 99.95th=[17957], 00:14:56.820 | 99.99th=[19268] 00:14:56.820 bw ( 
KiB/s): min=23200, max=23624, per=99.89%, avg=23378.00, stdev=199.67, samples=4 00:14:56.820 iops : min= 5800, max= 5906, avg=5844.50, stdev=49.92, samples=4 00:14:56.820 lat (msec) : 2=0.01%, 4=0.06%, 10=18.79%, 20=81.13%, 50=0.02% 00:14:56.820 cpu : usr=72.66%, sys=21.76%, ctx=3, majf=0, minf=14 00:14:56.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:56.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.820 issued rwts: total=11773,11755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.820 00:14:56.820 Run status group 0 (all jobs): 00:14:56.820 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:14:56.820 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:14:56.820 04:29:59 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:56.820 04:29:59 -- host/fio.sh@74 -- # sync 00:14:56.820 04:29:59 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:14:57.078 04:30:00 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:57.336 04:30:00 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:14:57.595 04:30:00 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:57.854 04:30:00 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:58.113 04:30:01 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:58.113 04:30:01 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:58.113 04:30:01 -- host/fio.sh@86 -- # nvmftestfini 00:14:58.113 04:30:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:58.113 04:30:01 -- nvmf/common.sh@116 -- # sync 00:14:58.113 04:30:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:58.113 04:30:01 -- nvmf/common.sh@119 -- # set +e 00:14:58.113 04:30:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:58.113 04:30:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:58.113 rmmod nvme_tcp 00:14:58.113 rmmod nvme_fabrics 00:14:58.113 rmmod nvme_keyring 00:14:58.113 04:30:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:58.113 04:30:01 -- nvmf/common.sh@123 -- # set -e 00:14:58.113 04:30:01 -- nvmf/common.sh@124 -- # return 0 00:14:58.113 04:30:01 -- nvmf/common.sh@477 -- # '[' -n 69434 ']' 00:14:58.113 04:30:01 -- nvmf/common.sh@478 -- # killprocess 69434 00:14:58.113 04:30:01 -- common/autotest_common.sh@936 -- # '[' -z 69434 ']' 00:14:58.113 04:30:01 -- common/autotest_common.sh@940 -- # kill -0 69434 00:14:58.113 04:30:01 -- common/autotest_common.sh@941 -- # uname 00:14:58.113 04:30:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.113 04:30:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69434 00:14:58.113 killing process with pid 69434 00:14:58.113 04:30:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:58.113 04:30:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:58.113 04:30:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69434' 00:14:58.113 04:30:01 -- 
common/autotest_common.sh@955 -- # kill 69434 00:14:58.113 04:30:01 -- common/autotest_common.sh@960 -- # wait 69434 00:14:58.370 04:30:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:58.370 04:30:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:58.370 04:30:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:58.370 04:30:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.370 04:30:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:58.370 04:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.370 04:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.370 04:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.370 04:30:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:58.370 ************************************ 00:14:58.370 END TEST nvmf_fio_host 00:14:58.370 ************************************ 00:14:58.370 00:14:58.370 real 0m19.141s 00:14:58.370 user 1m24.591s 00:14:58.370 sys 0m4.377s 00:14:58.370 04:30:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:58.370 04:30:01 -- common/autotest_common.sh@10 -- # set +x 00:14:58.370 04:30:01 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:58.370 04:30:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:58.370 04:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.370 04:30:01 -- common/autotest_common.sh@10 -- # set +x 00:14:58.370 ************************************ 00:14:58.370 START TEST nvmf_failover 00:14:58.370 ************************************ 00:14:58.370 04:30:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:58.628 * Looking for test storage... 00:14:58.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:58.628 04:30:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:58.628 04:30:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:58.628 04:30:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:58.628 04:30:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:58.628 04:30:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:58.628 04:30:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:58.628 04:30:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:58.628 04:30:01 -- scripts/common.sh@335 -- # IFS=.-: 00:14:58.628 04:30:01 -- scripts/common.sh@335 -- # read -ra ver1 00:14:58.628 04:30:01 -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.628 04:30:01 -- scripts/common.sh@336 -- # read -ra ver2 00:14:58.628 04:30:01 -- scripts/common.sh@337 -- # local 'op=<' 00:14:58.628 04:30:01 -- scripts/common.sh@339 -- # ver1_l=2 00:14:58.628 04:30:01 -- scripts/common.sh@340 -- # ver2_l=1 00:14:58.628 04:30:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:58.628 04:30:01 -- scripts/common.sh@343 -- # case "$op" in 00:14:58.628 04:30:01 -- scripts/common.sh@344 -- # : 1 00:14:58.628 04:30:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:58.628 04:30:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.628 04:30:01 -- scripts/common.sh@364 -- # decimal 1 00:14:58.628 04:30:01 -- scripts/common.sh@352 -- # local d=1 00:14:58.628 04:30:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.628 04:30:01 -- scripts/common.sh@354 -- # echo 1 00:14:58.628 04:30:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:58.629 04:30:01 -- scripts/common.sh@365 -- # decimal 2 00:14:58.629 04:30:01 -- scripts/common.sh@352 -- # local d=2 00:14:58.629 04:30:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.629 04:30:01 -- scripts/common.sh@354 -- # echo 2 00:14:58.629 04:30:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:58.629 04:30:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:58.629 04:30:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:58.629 04:30:01 -- scripts/common.sh@367 -- # return 0 00:14:58.629 04:30:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.629 04:30:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.629 --rc genhtml_branch_coverage=1 00:14:58.629 --rc genhtml_function_coverage=1 00:14:58.629 --rc genhtml_legend=1 00:14:58.629 --rc geninfo_all_blocks=1 00:14:58.629 --rc geninfo_unexecuted_blocks=1 00:14:58.629 00:14:58.629 ' 00:14:58.629 04:30:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.629 --rc genhtml_branch_coverage=1 00:14:58.629 --rc genhtml_function_coverage=1 00:14:58.629 --rc genhtml_legend=1 00:14:58.629 --rc geninfo_all_blocks=1 00:14:58.629 --rc geninfo_unexecuted_blocks=1 00:14:58.629 00:14:58.629 ' 00:14:58.629 04:30:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.629 --rc genhtml_branch_coverage=1 00:14:58.629 --rc genhtml_function_coverage=1 00:14:58.629 --rc genhtml_legend=1 00:14:58.629 --rc geninfo_all_blocks=1 00:14:58.629 --rc geninfo_unexecuted_blocks=1 00:14:58.629 00:14:58.629 ' 00:14:58.629 04:30:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:58.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.629 --rc genhtml_branch_coverage=1 00:14:58.629 --rc genhtml_function_coverage=1 00:14:58.629 --rc genhtml_legend=1 00:14:58.629 --rc geninfo_all_blocks=1 00:14:58.629 --rc geninfo_unexecuted_blocks=1 00:14:58.629 00:14:58.629 ' 00:14:58.629 04:30:01 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.629 04:30:01 -- nvmf/common.sh@7 -- # uname -s 00:14:58.629 04:30:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.629 04:30:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.629 04:30:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.629 04:30:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.629 04:30:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.629 04:30:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.629 04:30:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.629 04:30:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.629 04:30:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.629 04:30:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.629 04:30:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:14:58.629 
04:30:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:14:58.629 04:30:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.629 04:30:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.629 04:30:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.629 04:30:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.629 04:30:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.629 04:30:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.629 04:30:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.629 04:30:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.629 04:30:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.629 04:30:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.629 04:30:01 -- paths/export.sh@5 -- # export PATH 00:14:58.629 04:30:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.629 04:30:01 -- nvmf/common.sh@46 -- # : 0 00:14:58.629 04:30:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:58.629 04:30:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:58.629 04:30:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:58.629 04:30:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.629 04:30:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.629 04:30:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
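The hostnqn/hostid pair traced above is generated once per run by nvmf/common.sh using nvme-cli. A minimal sketch of one way to reproduce the same pair outside the harness follows; the hostid derivation and the connect line are illustrative assumptions, not lines taken from the script:

    # generate a host NQN with nvme-cli, as in the trace above
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # the UUID suffix of that NQN matches the host ID value shown in the trace
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    # both values are later handed to the initiator, e.g.
    #   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n <subsystem-nqn> \
    #       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
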
00:14:58.629 04:30:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:58.629 04:30:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:58.629 04:30:01 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.629 04:30:01 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.629 04:30:01 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.629 04:30:01 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.629 04:30:01 -- host/failover.sh@18 -- # nvmftestinit 00:14:58.629 04:30:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:58.629 04:30:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.629 04:30:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:58.629 04:30:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:58.629 04:30:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:58.629 04:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.629 04:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.629 04:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.629 04:30:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:58.629 04:30:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:58.629 04:30:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:58.629 04:30:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:58.629 04:30:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:58.629 04:30:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:58.629 04:30:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.629 04:30:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.629 04:30:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:58.629 04:30:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:58.629 04:30:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.629 04:30:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.629 04:30:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.629 04:30:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.629 04:30:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.629 04:30:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.629 04:30:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.629 04:30:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.629 04:30:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:58.629 04:30:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:58.629 Cannot find device "nvmf_tgt_br" 00:14:58.629 04:30:01 -- nvmf/common.sh@154 -- # true 00:14:58.629 04:30:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.629 Cannot find device "nvmf_tgt_br2" 00:14:58.629 04:30:01 -- nvmf/common.sh@155 -- # true 00:14:58.629 04:30:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:58.629 04:30:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:58.629 Cannot find device "nvmf_tgt_br" 00:14:58.629 04:30:01 -- nvmf/common.sh@157 -- # true 00:14:58.629 04:30:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:58.629 Cannot find device "nvmf_tgt_br2" 00:14:58.629 04:30:01 -- nvmf/common.sh@158 -- # true 00:14:58.629 04:30:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:58.887 04:30:01 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:14:58.887 04:30:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.887 04:30:01 -- nvmf/common.sh@161 -- # true 00:14:58.887 04:30:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.887 04:30:01 -- nvmf/common.sh@162 -- # true 00:14:58.887 04:30:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.887 04:30:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.887 04:30:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.887 04:30:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.887 04:30:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.887 04:30:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.887 04:30:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.887 04:30:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:58.887 04:30:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:58.887 04:30:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:58.887 04:30:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:58.887 04:30:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:58.887 04:30:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:58.887 04:30:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.887 04:30:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.887 04:30:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.887 04:30:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:58.887 04:30:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:58.887 04:30:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.887 04:30:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.887 04:30:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.887 04:30:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.887 04:30:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.887 04:30:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:58.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:58.887 00:14:58.887 --- 10.0.0.2 ping statistics --- 00:14:58.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.887 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:58.887 04:30:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:58.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:58.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:14:58.887 00:14:58.888 --- 10.0.0.3 ping statistics --- 00:14:58.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.888 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:58.888 04:30:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:58.888 00:14:58.888 --- 10.0.0.1 ping statistics --- 00:14:58.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.888 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:58.888 04:30:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.888 04:30:02 -- nvmf/common.sh@421 -- # return 0 00:14:58.888 04:30:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.888 04:30:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.888 04:30:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.888 04:30:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.888 04:30:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.888 04:30:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.888 04:30:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:59.146 04:30:02 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:59.146 04:30:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:59.146 04:30:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.146 04:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.146 04:30:02 -- nvmf/common.sh@469 -- # nvmfpid=69997 00:14:59.146 04:30:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:59.146 04:30:02 -- nvmf/common.sh@470 -- # waitforlisten 69997 00:14:59.146 04:30:02 -- common/autotest_common.sh@829 -- # '[' -z 69997 ']' 00:14:59.146 04:30:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.146 04:30:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.146 04:30:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.146 04:30:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.146 04:30:02 -- common/autotest_common.sh@10 -- # set +x 00:14:59.146 [2024-12-07 04:30:02.191864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:59.146 [2024-12-07 04:30:02.192164] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.146 [2024-12-07 04:30:02.329514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.404 [2024-12-07 04:30:02.385812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.404 [2024-12-07 04:30:02.386159] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.404 [2024-12-07 04:30:02.386240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
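For reference, the veth/namespace topology that nvmf_veth_init builds in the trace above condenses to the sketch below. Interface names and addresses are the ones from the trace; the second target interface (nvmf_tgt_if2 at 10.0.0.3) is created the same way and omitted here:

    # target side runs in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, one for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator gets 10.0.0.1, target gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends and allow NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check: the target address must answer before the tests start
    ping -c 1 10.0.0.2
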
00:14:59.404 [2024-12-07 04:30:02.386342] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.404 [2024-12-07 04:30:02.386762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.404 [2024-12-07 04:30:02.386864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.404 [2024-12-07 04:30:02.386871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.973 04:30:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.973 04:30:03 -- common/autotest_common.sh@862 -- # return 0 00:14:59.973 04:30:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.973 04:30:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.973 04:30:03 -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 04:30:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.231 04:30:03 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.490 [2024-12-07 04:30:03.494498] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.490 04:30:03 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:00.748 Malloc0 00:15:00.748 04:30:03 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.005 04:30:04 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.263 04:30:04 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.521 [2024-12-07 04:30:04.642641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.521 04:30:04 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:01.778 [2024-12-07 04:30:04.914987] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:01.778 04:30:04 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:02.045 [2024-12-07 04:30:05.143151] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:02.045 04:30:05 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:02.045 04:30:05 -- host/failover.sh@31 -- # bdevperf_pid=70060 00:15:02.045 04:30:05 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:02.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:02.045 04:30:05 -- host/failover.sh@34 -- # waitforlisten 70060 /var/tmp/bdevperf.sock 00:15:02.045 04:30:05 -- common/autotest_common.sh@829 -- # '[' -z 70060 ']' 00:15:02.045 04:30:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.045 04:30:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.045 04:30:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.045 04:30:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.045 04:30:05 -- common/autotest_common.sh@10 -- # set +x 00:15:02.304 04:30:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.304 04:30:05 -- common/autotest_common.sh@862 -- # return 0 00:15:02.304 04:30:05 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:02.561 NVMe0n1 00:15:02.819 04:30:05 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:03.077 00:15:03.077 04:30:06 -- host/failover.sh@39 -- # run_test_pid=70075 00:15:03.077 04:30:06 -- host/failover.sh@41 -- # sleep 1 00:15:03.077 04:30:06 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.011 04:30:07 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.272 [2024-12-07 04:30:07.387972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.272 [2024-12-07 04:30:07.388225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.272 [2024-12-07 04:30:07.388260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388277] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 
[2024-12-07 04:30:07.388337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388411] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388764] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 [2024-12-07 04:30:07.388804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378d00 is same with the state(5) to be set 00:15:04.273 04:30:07 -- host/failover.sh@45 -- # sleep 3 00:15:07.580 04:30:10 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:07.580 00:15:07.580 04:30:10 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:07.838 [2024-12-07 04:30:10.982186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.838 [2024-12-07 04:30:10.982239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.838 [2024-12-07 04:30:10.982268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.838 [2024-12-07 04:30:10.982276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.838 [2024-12-07 04:30:10.982283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.838 [2024-12-07 04:30:10.982291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 
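The repeated qpair state messages above appear while listeners are being removed under active I/O; the failover sequence driving them is just a few more RPC calls, collected here from the trace. Bdevperf RPCs go to /var/tmp/bdevperf.sock and target RPCs to the default socket; backgrounding perform_tests is an assumption implied by the surrounding script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # give bdevperf an active path (port 4420) and a second path (port 4421)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    # start the verify workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    # first failover: drop the active listener, wait, then add the next path and drop 4421
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
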
00:15:07.839 [2024-12-07 04:30:10.982351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [the same tcp.c:1576 message for tqpair=0x23793c0 repeats from 04:30:10.982358 through 04:30:10.982506] 00:15:07.839 [2024-12-07 04:30:10.982514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 [2024-12-07 04:30:10.982566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23793c0 is same with the state(5) to be set 00:15:07.839 04:30:11 -- host/failover.sh@50 -- # sleep 3 00:15:11.126 04:30:14 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.126 [2024-12-07 04:30:14.268023] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.126 04:30:14 -- host/failover.sh@55 -- # sleep 1 00:15:12.063 04:30:15 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:12.320 [2024-12-07 04:30:15.549289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.320 [2024-12-07 04:30:15.549438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23779f0 is same with the state(5) to be set 00:15:12.578 04:30:15 -- host/failover.sh@59 -- # wait 70075 00:15:19.147 0 00:15:19.147 04:30:21 -- host/failover.sh@61 -- # killprocess 70060 00:15:19.147 04:30:21 -- common/autotest_common.sh@936 -- # '[' -z 70060 ']' 00:15:19.147 04:30:21 -- common/autotest_common.sh@940 -- # kill -0 70060 00:15:19.148 04:30:21 -- common/autotest_common.sh@941 -- # uname 00:15:19.148 04:30:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.148 04:30:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70060 00:15:19.148 killing process with pid 70060 00:15:19.148 04:30:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.148 04:30:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.148 04:30:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70060' 00:15:19.148 04:30:21 -- common/autotest_common.sh@955 -- # kill 70060 00:15:19.148 04:30:21 -- common/autotest_common.sh@960 -- # wait 70060 00:15:19.148 04:30:21 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:19.148 [2024-12-07 04:30:05.209201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:19.148 [2024-12-07 04:30:05.209308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70060 ] 00:15:19.148 [2024-12-07 04:30:05.345180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.148 [2024-12-07 04:30:05.401554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.148 Running I/O for 15 seconds... 
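For reference, the failover steps traced by host/failover.sh above, gathered into one plain shell sketch. The rpc.py invocations are copied verbatim from the script trace; the rpc_py shorthand, the comments and the bare sleep calls are added here for readability, and the rest of the script is not reproduced, so treat this as an illustration of the traced sequence rather than the script itself.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# attach an additional trid (10.0.0.2:4422) for the NVMe0 controller, via bdevperf's RPC socket
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# remove the 4421 listener from the target subsystem, then give the host time to fail over
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
# bring the 4420 listener back and retire the temporary 4422 one
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# after bdevperf exits, its log (try.txt) is dumped, as continued below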
00:15:19.148 [2024-12-07 04:30:07.388875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.388939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.388967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.388998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 
04:30:07.389297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.389982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.389996] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.390009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.390024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.390037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.390052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.148 [2024-12-07 04:30:07.390076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.148 [2024-12-07 04:30:07.390097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126912 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.390931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:19.149 [2024-12-07 04:30:07.390961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.390984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.390999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 
04:30:07.391274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.149 [2024-12-07 04:30:07.391302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.149 [2024-12-07 04:30:07.391316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.149 [2024-12-07 04:30:07.391329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.391797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.391981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.391997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.150 [2024-12-07 04:30:07.392454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.150 [2024-12-07 04:30:07.392538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:19.150 [2024-12-07 04:30:07.392553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.151 [2024-12-07 04:30:07.392572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.151 [2024-12-07 04:30:07.392628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.151 [2024-12-07 04:30:07.392744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 
04:30:07.392893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.392966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.392990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:07.393007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.151 [2024-12-07 04:30:07.393037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.151 [2024-12-07 04:30:07.393067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1792970 is same with the state(5) to be set 00:15:19.151 [2024-12-07 04:30:07.393099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:19.151 [2024-12-07 04:30:07.393110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:19.151 [2024-12-07 04:30:07.393121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127336 len:8 PRP1 0x0 PRP2 0x0 00:15:19.151 [2024-12-07 04:30:07.393134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393182] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1792970 was disconnected and freed. reset controller. 
00:15:19.151 [2024-12-07 04:30:07.393201] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:19.151 [2024-12-07 04:30:07.393256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.151 [2024-12-07 04:30:07.393279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.151 [2024-12-07 04:30:07.393308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.151 [2024-12-07 04:30:07.393339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.151 [2024-12-07 04:30:07.393367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:07.393380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:19.151 [2024-12-07 04:30:07.395824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:19.151 [2024-12-07 04:30:07.395864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172f690 (9): Bad file descriptor 00:15:19.151 [2024-12-07 04:30:07.429344] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
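The failover, disconnect and reset events just above are easy to lose among the thousands of per-command abort notices around them; a quick grep over the bdevperf log pulls them out. A minimal sketch, assuming only the try.txt path already shown in this log and the message prefixes that appear above:
grep -E 'bdev_nvme_failover_trid|bdev_nvme_disconnected_qpair_cb|_bdev_nvme_reset_ctrlr_complete|nvme_ctrlr_fail|nvme_ctrlr_disconnect' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt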
00:15:19.151 [2024-12-07 04:30:10.982631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.982977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.982992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.983021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 
04:30:10.983049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.983077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.983120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.983165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.983205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.151 [2024-12-07 04:30:10.983219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.151 [2024-12-07 04:30:10.983234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.983704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983720] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.983806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.983863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.983917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.983944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.983985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.983997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.984140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.152 [2024-12-07 04:30:10.984191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.152 [2024-12-07 04:30:10.984290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.152 [2024-12-07 04:30:10.984303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:19.153 [2024-12-07 04:30:10.984662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.984878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 
04:30:10.984958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.984973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.984986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.985259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.985343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.985398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.153 [2024-12-07 04:30:10.985426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.153 [2024-12-07 04:30:10.985440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.153 [2024-12-07 04:30:10.985454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.985482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.985560] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.985587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.985680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.985737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.985764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.985976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.985989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 
[2024-12-07 04:30:10.986414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.154 [2024-12-07 04:30:10.986507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.154 [2024-12-07 04:30:10.986561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.154 [2024-12-07 04:30:10.986581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777450 is same with the state(5) to be set 00:15:19.155 [2024-12-07 04:30:10.986597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:19.155 [2024-12-07 04:30:10.986607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:19.155 [2024-12-07 04:30:10.986617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129176 len:8 PRP1 0x0 PRP2 0x0 00:15:19.155 [2024-12-07 04:30:10.986629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:10.986701] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1777450 was disconnected and freed. reset controller. 
00:15:19.155 [2024-12-07 04:30:10.986720] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:19.155 [2024-12-07 04:30:10.986771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.155 [2024-12-07 04:30:10.986793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:10.986807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.155 [2024-12-07 04:30:10.986820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:10.986833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.155 [2024-12-07 04:30:10.986846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:10.986860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.155 [2024-12-07 04:30:10.986872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:10.986885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:19.155 [2024-12-07 04:30:10.986931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172f690 (9): Bad file descriptor 00:15:19.155 [2024-12-07 04:30:10.989447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:19.155 [2024-12-07 04:30:11.024838] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:19.155 [2024-12-07 04:30:15.549498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 
04:30:15.549932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.549960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.549973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.155 [2024-12-07 04:30:15.550017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.155 [2024-12-07 04:30:15.550047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.155 [2024-12-07 04:30:15.550486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.155 [2024-12-07 04:30:15.550500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550553] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.550625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.550750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.550779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.550971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.550987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 
[2024-12-07 04:30:15.551559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.156 [2024-12-07 04:30:15.551772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.156 [2024-12-07 04:30:15.551803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.156 [2024-12-07 04:30:15.551818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.551832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.551847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.551861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.551876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.551889] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.551904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.551919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.551935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.551949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.551964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.551978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.551993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552557] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.157 [2024-12-07 04:30:15.552884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.157 [2024-12-07 04:30:15.552929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.157 [2024-12-07 04:30:15.552942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.552957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.552971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.552986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.158 [2024-12-07 04:30:15.553011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.158 [2024-12-07 04:30:15.553331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.158 [2024-12-07 04:30:15.553390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.158 [2024-12-07 04:30:15.553486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:19.158 [2024-12-07 04:30:15.553543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.158 [2024-12-07 04:30:15.553705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a48e0 is same with the state(5) to be set 00:15:19.158 [2024-12-07 04:30:15.553740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:19.158 [2024-12-07 04:30:15.553751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:19.158 [2024-12-07 04:30:15.553761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112576 len:8 PRP1 0x0 PRP2 0x0 00:15:19.158 [2024-12-07 04:30:15.553773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553819] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17a48e0 was disconnected and freed. reset controller. 
00:15:19.158 [2024-12-07 04:30:15.553845] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:19.158 [2024-12-07 04:30:15.553898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.158 [2024-12-07 04:30:15.553919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.158 [2024-12-07 04:30:15.553946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.158 [2024-12-07 04:30:15.553976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.553990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.158 [2024-12-07 04:30:15.554002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.158 [2024-12-07 04:30:15.554015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:19.158 [2024-12-07 04:30:15.554060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172f690 (9): Bad file descriptor 00:15:19.158 [2024-12-07 04:30:15.556549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:19.158 [2024-12-07 04:30:15.590328] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
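The block above is one complete failover cycle: the TCP qpair toward 10.0.0.2:4422 drops, every command still queued on it is completed manually with ABORTED - SQ DELETION status (the long run of paired print_command/print_completion notices), the path is failed over to 10.0.0.2:4420, and the controller reset finishes. When scanning a capture like try.txt by hand, the cycle boundaries are easier to spot by filtering than by reading the aborts themselves; a small, illustrative filter (try.txt is the capture file this test writes, as seen later in the trace):

    # Count aborted commands, then list the failover / reset milestones in order.
    grep -c 'ABORTED - SQ DELETION' try.txt
    grep -nE 'bdev_nvme_failover_trid|disconnected and freed|Resetting controller successful' try.txt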
00:15:19.158
00:15:19.158 Latency(us)
00:15:19.158 [2024-12-07T04:30:22.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:19.158 [2024-12-07T04:30:22.398Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:19.158 Verification LBA range: start 0x0 length 0x4000
00:15:19.158 NVMe0n1 : 15.01 13469.98 52.62 355.38 0.00 9240.36 465.45 14537.08
00:15:19.158 [2024-12-07T04:30:22.398Z] ===================================================================================================================
00:15:19.158 [2024-12-07T04:30:22.398Z] Total : 13469.98 52.62 355.38 0.00 9240.36 465.45 14537.08
00:15:19.158 Received shutdown signal, test time was about 15.000000 seconds
00:15:19.158
00:15:19.158 Latency(us)
00:15:19.158 [2024-12-07T04:30:22.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:19.158 [2024-12-07T04:30:22.398Z] ===================================================================================================================
00:15:19.158 [2024-12-07T04:30:22.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:19.158 04:30:21 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:19.158 04:30:21 -- host/failover.sh@65 -- # count=3
00:15:19.158 04:30:21 -- host/failover.sh@67 -- # (( count != 3 ))
00:15:19.158 04:30:21 -- host/failover.sh@73 -- # bdevperf_pid=70249
00:15:19.158 04:30:21 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:15:19.158 04:30:21 -- host/failover.sh@75 -- # waitforlisten 70249 /var/tmp/bdevperf.sock
00:15:19.159 04:30:21 -- common/autotest_common.sh@829 -- # '[' -z 70249 ']'
00:15:19.159 04:30:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:19.159 04:30:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:19.159 04:30:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
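The grep/count/test entries above are the pass check for the 15-second phase: the captured output must contain exactly three 'Resetting controller successful' messages, and the bdevperf line restarts the tool idle (-z) behind /var/tmp/bdevperf.sock so the next phase can be driven over RPC. Condensed from the trace that follows, that second phase looks roughly like the sketch below (same commands and arguments as the log; the rpc/sock/nqn shorthands, comments, and the collapsed get_controllers checks are added here for readability, so treat it as illustrative rather than the verbatim failover.sh source):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Phase-1 pass check: the 15-second run must have recovered from every injected path failure.
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || exit 1

    # Expose two extra target listeners, then attach all three paths to the same NVMe0 bdev.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn

    # Drop the primary path, give bdev_nvme time to fail over, then run the timed verify workload.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    sleep 3
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

    # Remove the remaining paths one at a time; the NVMe0 controller must survive each removal.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0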
00:15:19.159 04:30:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.159 04:30:21 -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 04:30:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.417 04:30:22 -- common/autotest_common.sh@862 -- # return 0 00:15:19.417 04:30:22 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:19.675 [2024-12-07 04:30:22.867066] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:19.675 04:30:22 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:19.934 [2024-12-07 04:30:23.115324] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:19.934 04:30:23 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.193 NVMe0n1 00:15:20.455 04:30:23 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.722 00:15:20.722 04:30:23 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.981 00:15:20.981 04:30:24 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:20.981 04:30:24 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:21.240 04:30:24 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:21.498 04:30:24 -- host/failover.sh@87 -- # sleep 3 00:15:24.777 04:30:27 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:24.777 04:30:27 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:24.777 04:30:27 -- host/failover.sh@90 -- # run_test_pid=70330 00:15:24.777 04:30:27 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:24.777 04:30:27 -- host/failover.sh@92 -- # wait 70330 00:15:26.152 0 00:15:26.152 04:30:29 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:26.152 [2024-12-07 04:30:21.546167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:26.152 [2024-12-07 04:30:21.546274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70249 ] 00:15:26.152 [2024-12-07 04:30:21.688624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.152 [2024-12-07 04:30:21.743582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.152 [2024-12-07 04:30:24.591720] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:26.152 [2024-12-07 04:30:24.591900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.152 [2024-12-07 04:30:24.591925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.153 [2024-12-07 04:30:24.591944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.153 [2024-12-07 04:30:24.591957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.153 [2024-12-07 04:30:24.591971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.153 [2024-12-07 04:30:24.591984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.153 [2024-12-07 04:30:24.591998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.153 [2024-12-07 04:30:24.592025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.153 [2024-12-07 04:30:24.592038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:26.153 [2024-12-07 04:30:24.592087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:26.153 [2024-12-07 04:30:24.592119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2027690 (9): Bad file descriptor 00:15:26.153 [2024-12-07 04:30:24.600969] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:26.153 Running I/O for 1 seconds... 
00:15:26.153 00:15:26.153 Latency(us) 00:15:26.153 [2024-12-07T04:30:29.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.153 [2024-12-07T04:30:29.393Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.153 Verification LBA range: start 0x0 length 0x4000 00:15:26.153 NVMe0n1 : 1.01 13486.84 52.68 0.00 0.00 9444.07 1079.85 12153.95 00:15:26.153 [2024-12-07T04:30:29.393Z] =================================================================================================================== 00:15:26.153 [2024-12-07T04:30:29.393Z] Total : 13486.84 52.68 0.00 0.00 9444.07 1079.85 12153.95 00:15:26.153 04:30:29 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:26.153 04:30:29 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:26.153 04:30:29 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:26.720 04:30:29 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:26.720 04:30:29 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:26.720 04:30:29 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:26.978 04:30:30 -- host/failover.sh@101 -- # sleep 3 00:15:30.268 04:30:33 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:30.268 04:30:33 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:30.268 04:30:33 -- host/failover.sh@108 -- # killprocess 70249 00:15:30.268 04:30:33 -- common/autotest_common.sh@936 -- # '[' -z 70249 ']' 00:15:30.268 04:30:33 -- common/autotest_common.sh@940 -- # kill -0 70249 00:15:30.268 04:30:33 -- common/autotest_common.sh@941 -- # uname 00:15:30.268 04:30:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.268 04:30:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70249 00:15:30.528 killing process with pid 70249 00:15:30.528 04:30:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:30.528 04:30:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:30.528 04:30:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70249' 00:15:30.528 04:30:33 -- common/autotest_common.sh@955 -- # kill 70249 00:15:30.528 04:30:33 -- common/autotest_common.sh@960 -- # wait 70249 00:15:30.528 04:30:33 -- host/failover.sh@110 -- # sync 00:15:30.528 04:30:33 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.787 04:30:33 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:30.787 04:30:33 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:30.787 04:30:33 -- host/failover.sh@116 -- # nvmftestfini 00:15:30.787 04:30:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:30.787 04:30:33 -- nvmf/common.sh@116 -- # sync 00:15:30.787 04:30:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:30.787 04:30:34 -- nvmf/common.sh@119 -- # set +e 00:15:30.787 04:30:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:30.787 04:30:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:30.787 rmmod nvme_tcp 
00:15:31.046 rmmod nvme_fabrics 00:15:31.046 rmmod nvme_keyring 00:15:31.046 04:30:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:31.046 04:30:34 -- nvmf/common.sh@123 -- # set -e 00:15:31.046 04:30:34 -- nvmf/common.sh@124 -- # return 0 00:15:31.046 04:30:34 -- nvmf/common.sh@477 -- # '[' -n 69997 ']' 00:15:31.046 04:30:34 -- nvmf/common.sh@478 -- # killprocess 69997 00:15:31.046 04:30:34 -- common/autotest_common.sh@936 -- # '[' -z 69997 ']' 00:15:31.046 04:30:34 -- common/autotest_common.sh@940 -- # kill -0 69997 00:15:31.046 04:30:34 -- common/autotest_common.sh@941 -- # uname 00:15:31.046 04:30:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.046 04:30:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69997 00:15:31.046 killing process with pid 69997 00:15:31.046 04:30:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:31.046 04:30:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:31.046 04:30:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69997' 00:15:31.046 04:30:34 -- common/autotest_common.sh@955 -- # kill 69997 00:15:31.046 04:30:34 -- common/autotest_common.sh@960 -- # wait 69997 00:15:31.046 04:30:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:31.046 04:30:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:31.046 04:30:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:31.046 04:30:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.046 04:30:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:31.046 04:30:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.046 04:30:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.046 04:30:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.306 04:30:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:31.306 00:15:31.306 real 0m32.733s 00:15:31.306 user 2m7.159s 00:15:31.306 sys 0m5.427s 00:15:31.306 04:30:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:31.306 ************************************ 00:15:31.306 END TEST nvmf_failover 00:15:31.306 ************************************ 00:15:31.306 04:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.306 04:30:34 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:31.306 04:30:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:31.306 04:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:31.306 04:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.306 ************************************ 00:15:31.306 START TEST nvmf_discovery 00:15:31.306 ************************************ 00:15:31.306 04:30:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:31.306 * Looking for test storage... 
00:15:31.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:31.306 04:30:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:31.306 04:30:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:31.306 04:30:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:31.306 04:30:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:31.306 04:30:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:31.306 04:30:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:31.306 04:30:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:31.306 04:30:34 -- scripts/common.sh@335 -- # IFS=.-: 00:15:31.306 04:30:34 -- scripts/common.sh@335 -- # read -ra ver1 00:15:31.306 04:30:34 -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.306 04:30:34 -- scripts/common.sh@336 -- # read -ra ver2 00:15:31.306 04:30:34 -- scripts/common.sh@337 -- # local 'op=<' 00:15:31.306 04:30:34 -- scripts/common.sh@339 -- # ver1_l=2 00:15:31.306 04:30:34 -- scripts/common.sh@340 -- # ver2_l=1 00:15:31.306 04:30:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:31.306 04:30:34 -- scripts/common.sh@343 -- # case "$op" in 00:15:31.306 04:30:34 -- scripts/common.sh@344 -- # : 1 00:15:31.306 04:30:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:31.306 04:30:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.306 04:30:34 -- scripts/common.sh@364 -- # decimal 1 00:15:31.306 04:30:34 -- scripts/common.sh@352 -- # local d=1 00:15:31.306 04:30:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.306 04:30:34 -- scripts/common.sh@354 -- # echo 1 00:15:31.306 04:30:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:31.306 04:30:34 -- scripts/common.sh@365 -- # decimal 2 00:15:31.306 04:30:34 -- scripts/common.sh@352 -- # local d=2 00:15:31.306 04:30:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.306 04:30:34 -- scripts/common.sh@354 -- # echo 2 00:15:31.306 04:30:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:31.306 04:30:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:31.306 04:30:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:31.306 04:30:34 -- scripts/common.sh@367 -- # return 0 00:15:31.306 04:30:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.306 04:30:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:31.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.306 --rc genhtml_branch_coverage=1 00:15:31.306 --rc genhtml_function_coverage=1 00:15:31.306 --rc genhtml_legend=1 00:15:31.306 --rc geninfo_all_blocks=1 00:15:31.306 --rc geninfo_unexecuted_blocks=1 00:15:31.306 00:15:31.306 ' 00:15:31.306 04:30:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:31.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.306 --rc genhtml_branch_coverage=1 00:15:31.306 --rc genhtml_function_coverage=1 00:15:31.306 --rc genhtml_legend=1 00:15:31.306 --rc geninfo_all_blocks=1 00:15:31.306 --rc geninfo_unexecuted_blocks=1 00:15:31.306 00:15:31.306 ' 00:15:31.306 04:30:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:31.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.306 --rc genhtml_branch_coverage=1 00:15:31.306 --rc genhtml_function_coverage=1 00:15:31.306 --rc genhtml_legend=1 00:15:31.306 --rc geninfo_all_blocks=1 00:15:31.306 --rc geninfo_unexecuted_blocks=1 00:15:31.306 00:15:31.306 ' 00:15:31.306 
04:30:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:31.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.306 --rc genhtml_branch_coverage=1 00:15:31.306 --rc genhtml_function_coverage=1 00:15:31.306 --rc genhtml_legend=1 00:15:31.306 --rc geninfo_all_blocks=1 00:15:31.306 --rc geninfo_unexecuted_blocks=1 00:15:31.306 00:15:31.306 ' 00:15:31.306 04:30:34 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.306 04:30:34 -- nvmf/common.sh@7 -- # uname -s 00:15:31.306 04:30:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.306 04:30:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.306 04:30:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.306 04:30:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.306 04:30:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.306 04:30:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.306 04:30:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.306 04:30:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.306 04:30:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.306 04:30:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.567 04:30:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:15:31.567 04:30:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:15:31.567 04:30:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.567 04:30:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.567 04:30:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.567 04:30:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.567 04:30:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.567 04:30:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.567 04:30:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.567 04:30:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.567 04:30:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.567 04:30:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.567 04:30:34 -- paths/export.sh@5 -- # export PATH 00:15:31.567 04:30:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.567 04:30:34 -- nvmf/common.sh@46 -- # : 0 00:15:31.567 04:30:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:31.567 04:30:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:31.567 04:30:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:31.567 04:30:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.567 04:30:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.567 04:30:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:31.567 04:30:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:31.567 04:30:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:31.567 04:30:34 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:31.567 04:30:34 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:31.567 04:30:34 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:31.567 04:30:34 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:31.567 04:30:34 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:31.567 04:30:34 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:31.567 04:30:34 -- host/discovery.sh@25 -- # nvmftestinit 00:15:31.567 04:30:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:31.567 04:30:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.567 04:30:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:31.567 04:30:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:31.567 04:30:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:31.567 04:30:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.567 04:30:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.567 04:30:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.567 04:30:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:31.567 04:30:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:31.567 04:30:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:31.567 04:30:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:31.567 04:30:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:31.567 04:30:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:31.567 04:30:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.567 04:30:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.567 04:30:34 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.567 04:30:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:31.567 04:30:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.567 04:30:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.567 04:30:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.567 04:30:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.567 04:30:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.567 04:30:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.567 04:30:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.567 04:30:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.567 04:30:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:31.567 04:30:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:31.567 Cannot find device "nvmf_tgt_br" 00:15:31.567 04:30:34 -- nvmf/common.sh@154 -- # true 00:15:31.567 04:30:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.567 Cannot find device "nvmf_tgt_br2" 00:15:31.567 04:30:34 -- nvmf/common.sh@155 -- # true 00:15:31.567 04:30:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:31.567 04:30:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:31.567 Cannot find device "nvmf_tgt_br" 00:15:31.567 04:30:34 -- nvmf/common.sh@157 -- # true 00:15:31.567 04:30:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:31.567 Cannot find device "nvmf_tgt_br2" 00:15:31.567 04:30:34 -- nvmf/common.sh@158 -- # true 00:15:31.567 04:30:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:31.567 04:30:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:31.567 04:30:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.567 04:30:34 -- nvmf/common.sh@161 -- # true 00:15:31.567 04:30:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.567 04:30:34 -- nvmf/common.sh@162 -- # true 00:15:31.567 04:30:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.567 04:30:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.567 04:30:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.567 04:30:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.567 04:30:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.567 04:30:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.567 04:30:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.567 04:30:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.567 04:30:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.567 04:30:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:31.567 04:30:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:31.567 04:30:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:31.567 04:30:34 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:31.567 04:30:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.567 04:30:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.567 04:30:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.567 04:30:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:31.567 04:30:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:31.567 04:30:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.567 04:30:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.827 04:30:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.827 04:30:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.827 04:30:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.827 04:30:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:31.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:31.827 00:15:31.827 --- 10.0.0.2 ping statistics --- 00:15:31.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.827 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:31.827 04:30:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:31.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:31.827 00:15:31.827 --- 10.0.0.3 ping statistics --- 00:15:31.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.827 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:31.827 04:30:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:31.827 00:15:31.827 --- 10.0.0.1 ping statistics --- 00:15:31.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.827 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:31.827 04:30:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.827 04:30:34 -- nvmf/common.sh@421 -- # return 0 00:15:31.827 04:30:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.827 04:30:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.827 04:30:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.827 04:30:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.827 04:30:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.827 04:30:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.827 04:30:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.827 04:30:34 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:31.827 04:30:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.827 04:30:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.827 04:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.827 04:30:34 -- nvmf/common.sh@469 -- # nvmfpid=70616 00:15:31.827 04:30:34 -- nvmf/common.sh@470 -- # waitforlisten 70616 00:15:31.827 04:30:34 -- common/autotest_common.sh@829 -- # '[' -z 70616 ']' 00:15:31.827 04:30:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.827 04:30:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:31.827 04:30:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.827 04:30:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.827 04:30:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.827 04:30:34 -- common/autotest_common.sh@10 -- # set +x 00:15:31.827 [2024-12-07 04:30:34.921614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:31.827 [2024-12-07 04:30:34.921717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.827 [2024-12-07 04:30:35.059926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.085 [2024-12-07 04:30:35.113586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:32.085 [2024-12-07 04:30:35.113804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.085 [2024-12-07 04:30:35.113819] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.085 [2024-12-07 04:30:35.113829] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
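[editor's note] The nvmf_veth_init sequence traced above builds the loopback topology this job tests against: two target-side veth interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, their peer ends are enslaved to the nvmf_br bridge alongside the initiator peer, and iptables rules admit TCP/4420 plus bridge-local forwarding before connectivity is ping-checked. The sketch below is only a condensed recap of the commands already executed in the trace (same nvmf_* device names and 10.0.0.0/24 addressing, run as root); it is not part of the logged output.
#!/usr/bin/env bash
# Recap of nvmf_veth_init as traced above; assumes no conflicting nvmf_* devices or namespace exist yet.
set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry addresses, the *_br ends get bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the initiator and both target peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity check in both directions, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
[end editor's note]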
00:15:32.085 [2024-12-07 04:30:35.113854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.020 04:30:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.020 04:30:35 -- common/autotest_common.sh@862 -- # return 0 00:15:33.020 04:30:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:33.020 04:30:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.020 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 04:30:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.020 04:30:35 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.020 04:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.020 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 [2024-12-07 04:30:35.977164] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.020 04:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.020 04:30:35 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:33.020 04:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.020 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 [2024-12-07 04:30:35.985216] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:33.020 04:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.020 04:30:35 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:33.020 04:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.020 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 null0 00:15:33.020 04:30:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.020 04:30:35 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:33.020 04:30:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.020 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 null1 00:15:33.020 04:30:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.020 04:30:36 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:33.020 04:30:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.020 04:30:36 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:33.020 04:30:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.020 04:30:36 -- host/discovery.sh@45 -- # hostpid=70650 00:15:33.020 04:30:36 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:33.020 04:30:36 -- host/discovery.sh@46 -- # waitforlisten 70650 /tmp/host.sock 00:15:33.020 04:30:36 -- common/autotest_common.sh@829 -- # '[' -z 70650 ']' 00:15:33.020 04:30:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:33.020 04:30:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.020 04:30:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:33.020 04:30:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.020 04:30:36 -- common/autotest_common.sh@10 -- # set +x 00:15:33.020 [2024-12-07 04:30:36.073694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:33.020 [2024-12-07 04:30:36.073805] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70650 ] 00:15:33.020 [2024-12-07 04:30:36.208357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.278 [2024-12-07 04:30:36.262574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:33.278 [2024-12-07 04:30:36.262761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.887 04:30:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.887 04:30:37 -- common/autotest_common.sh@862 -- # return 0 00:15:33.887 04:30:37 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.887 04:30:37 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:33.887 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.887 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:33.887 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.887 04:30:37 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:33.887 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.887 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:33.887 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.887 04:30:37 -- host/discovery.sh@72 -- # notify_id=0 00:15:33.887 04:30:37 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:33.887 04:30:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.887 04:30:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.887 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.887 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:33.887 04:30:37 -- host/discovery.sh@59 -- # xargs 00:15:33.887 04:30:37 -- host/discovery.sh@59 -- # sort 00:15:33.887 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.887 04:30:37 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:33.887 04:30:37 -- host/discovery.sh@79 -- # get_bdev_list 00:15:33.887 04:30:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.887 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.887 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:33.887 04:30:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.887 04:30:37 -- host/discovery.sh@55 -- # sort 00:15:33.887 04:30:37 -- host/discovery.sh@55 -- # xargs 00:15:33.887 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.146 04:30:37 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:34.146 04:30:37 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:34.146 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.146 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.146 04:30:37 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.146 04:30:37 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.146 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # sort 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # xargs 00:15:34.146 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.146 04:30:37 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:15:34.146 04:30:37 -- host/discovery.sh@83 -- # get_bdev_list 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.146 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.146 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # sort 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # xargs 00:15:34.146 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.146 04:30:37 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:34.146 04:30:37 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:34.146 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.146 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.146 04:30:37 -- host/discovery.sh@86 -- # get_subsystem_names 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.146 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.146 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # sort 00:15:34.146 04:30:37 -- host/discovery.sh@59 -- # xargs 00:15:34.146 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.146 04:30:37 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:15:34.146 04:30:37 -- host/discovery.sh@87 -- # get_bdev_list 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.146 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.146 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # sort 00:15:34.146 04:30:37 -- host/discovery.sh@55 -- # xargs 00:15:34.146 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:34.406 04:30:37 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:34.406 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.406 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.406 [2024-12-07 04:30:37.421759] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.406 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@92 -- # get_subsystem_names 00:15:34.406 04:30:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.406 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.406 04:30:37 -- host/discovery.sh@59 -- # sort 00:15:34.406 04:30:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.406 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.406 04:30:37 -- host/discovery.sh@59 -- # xargs 
00:15:34.406 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:34.406 04:30:37 -- host/discovery.sh@93 -- # get_bdev_list 00:15:34.406 04:30:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.406 04:30:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.406 04:30:37 -- host/discovery.sh@55 -- # sort 00:15:34.406 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.406 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.406 04:30:37 -- host/discovery.sh@55 -- # xargs 00:15:34.406 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:15:34.406 04:30:37 -- host/discovery.sh@94 -- # get_notification_count 00:15:34.406 04:30:37 -- host/discovery.sh@74 -- # jq '. | length' 00:15:34.406 04:30:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:34.406 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.406 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.406 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@74 -- # notification_count=0 00:15:34.406 04:30:37 -- host/discovery.sh@75 -- # notify_id=0 00:15:34.406 04:30:37 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:34.406 04:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.406 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.406 04:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.406 04:30:37 -- host/discovery.sh@100 -- # sleep 1 00:15:34.975 [2024-12-07 04:30:38.059893] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:34.975 [2024-12-07 04:30:38.059950] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:34.975 [2024-12-07 04:30:38.059969] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:34.975 [2024-12-07 04:30:38.065928] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:34.975 [2024-12-07 04:30:38.121594] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:34.975 [2024-12-07 04:30:38.121621] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:35.543 04:30:38 -- host/discovery.sh@101 -- # get_subsystem_names 00:15:35.543 04:30:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.543 04:30:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.543 04:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.543 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.543 04:30:38 -- host/discovery.sh@59 -- # xargs 00:15:35.543 04:30:38 -- host/discovery.sh@59 -- # sort 00:15:35.543 04:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.543 04:30:38 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.543 04:30:38 -- host/discovery.sh@102 -- # get_bdev_list 00:15:35.543 04:30:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:15:35.543 04:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.543 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.543 04:30:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.543 04:30:38 -- host/discovery.sh@55 -- # sort 00:15:35.543 04:30:38 -- host/discovery.sh@55 -- # xargs 00:15:35.543 04:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.543 04:30:38 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:35.543 04:30:38 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:15:35.543 04:30:38 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:35.543 04:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.543 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.543 04:30:38 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:35.543 04:30:38 -- host/discovery.sh@63 -- # sort -n 00:15:35.543 04:30:38 -- host/discovery.sh@63 -- # xargs 00:15:35.543 04:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.543 04:30:38 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:15:35.543 04:30:38 -- host/discovery.sh@104 -- # get_notification_count 00:15:35.543 04:30:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:35.543 04:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.543 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.543 04:30:38 -- host/discovery.sh@74 -- # jq '. | length' 00:15:35.543 04:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.802 04:30:38 -- host/discovery.sh@74 -- # notification_count=1 00:15:35.802 04:30:38 -- host/discovery.sh@75 -- # notify_id=1 00:15:35.802 04:30:38 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:15:35.802 04:30:38 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:35.802 04:30:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.802 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.802 04:30:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.802 04:30:38 -- host/discovery.sh@109 -- # sleep 1 00:15:36.741 04:30:39 -- host/discovery.sh@110 -- # get_bdev_list 00:15:36.741 04:30:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:36.741 04:30:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.741 04:30:39 -- host/discovery.sh@55 -- # xargs 00:15:36.741 04:30:39 -- host/discovery.sh@55 -- # sort 00:15:36.741 04:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.741 04:30:39 -- common/autotest_common.sh@10 -- # set +x 00:15:36.741 04:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.741 04:30:39 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:36.741 04:30:39 -- host/discovery.sh@111 -- # get_notification_count 00:15:36.741 04:30:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:36.741 04:30:39 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:36.741 04:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.741 04:30:39 -- common/autotest_common.sh@10 -- # set +x 00:15:36.741 04:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.741 04:30:39 -- host/discovery.sh@74 -- # notification_count=1 00:15:36.741 04:30:39 -- host/discovery.sh@75 -- # notify_id=2 00:15:36.741 04:30:39 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:15:36.741 04:30:39 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:36.741 04:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.741 04:30:39 -- common/autotest_common.sh@10 -- # set +x 00:15:36.741 [2024-12-07 04:30:39.912606] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:36.741 [2024-12-07 04:30:39.913305] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:36.741 [2024-12-07 04:30:39.913358] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:36.741 04:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.741 04:30:39 -- host/discovery.sh@117 -- # sleep 1 00:15:36.741 [2024-12-07 04:30:39.919294] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:36.999 [2024-12-07 04:30:39.983584] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:36.999 [2024-12-07 04:30:39.983627] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:36.999 [2024-12-07 04:30:39.983635] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:37.935 04:30:40 -- host/discovery.sh@118 -- # get_subsystem_names 00:15:37.935 04:30:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:37.935 04:30:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:37.935 04:30:40 -- host/discovery.sh@59 -- # sort 00:15:37.935 04:30:40 -- host/discovery.sh@59 -- # xargs 00:15:37.935 04:30:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.935 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:15:37.935 04:30:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.935 04:30:40 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.935 04:30:40 -- host/discovery.sh@119 -- # get_bdev_list 00:15:37.935 04:30:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.935 04:30:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.935 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:15:37.935 04:30:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.935 04:30:40 -- host/discovery.sh@55 -- # xargs 00:15:37.935 04:30:40 -- host/discovery.sh@55 -- # sort 00:15:37.935 04:30:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:15:37.935 04:30:41 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:37.935 04:30:41 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:37.935 04:30:41 -- host/discovery.sh@63 
-- # sort -n 00:15:37.935 04:30:41 -- host/discovery.sh@63 -- # xargs 00:15:37.935 04:30:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.935 04:30:41 -- common/autotest_common.sh@10 -- # set +x 00:15:37.935 04:30:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@121 -- # get_notification_count 00:15:37.935 04:30:41 -- host/discovery.sh@74 -- # jq '. | length' 00:15:37.935 04:30:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:37.935 04:30:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.935 04:30:41 -- common/autotest_common.sh@10 -- # set +x 00:15:37.935 04:30:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@74 -- # notification_count=0 00:15:37.935 04:30:41 -- host/discovery.sh@75 -- # notify_id=2 00:15:37.935 04:30:41 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:37.935 04:30:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.935 04:30:41 -- common/autotest_common.sh@10 -- # set +x 00:15:37.935 [2024-12-07 04:30:41.130909] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:37.935 [2024-12-07 04:30:41.130947] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:37.935 04:30:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.935 04:30:41 -- host/discovery.sh@127 -- # sleep 1 00:15:37.935 [2024-12-07 04:30:41.136909] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:37.935 [2024-12-07 04:30:41.136946] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:37.935 [2024-12-07 04:30:41.137061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.935 [2024-12-07 04:30:41.137121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.935 [2024-12-07 04:30:41.137148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.935 [2024-12-07 04:30:41.137157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.935 [2024-12-07 04:30:41.137167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.935 [2024-12-07 04:30:41.137175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.935 [2024-12-07 04:30:41.137184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.935 [2024-12-07 04:30:41.137192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.935 [2024-12-07 04:30:41.137201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x215ac10 is same with the state(5) to be set 00:15:39.313 04:30:42 -- host/discovery.sh@128 -- # get_subsystem_names 00:15:39.313 04:30:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:39.313 04:30:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:39.313 04:30:42 -- host/discovery.sh@59 -- # sort 00:15:39.313 04:30:42 -- host/discovery.sh@59 -- # xargs 00:15:39.313 04:30:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.313 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 04:30:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@129 -- # get_bdev_list 00:15:39.313 04:30:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:39.313 04:30:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:39.313 04:30:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.313 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 04:30:42 -- host/discovery.sh@55 -- # xargs 00:15:39.313 04:30:42 -- host/discovery.sh@55 -- # sort 00:15:39.313 04:30:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:15:39.313 04:30:42 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:39.313 04:30:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.313 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 04:30:42 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:39.313 04:30:42 -- host/discovery.sh@63 -- # sort -n 00:15:39.313 04:30:42 -- host/discovery.sh@63 -- # xargs 00:15:39.313 04:30:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@131 -- # get_notification_count 00:15:39.313 04:30:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:39.313 04:30:42 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:39.313 04:30:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.313 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 04:30:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@74 -- # notification_count=0 00:15:39.313 04:30:42 -- host/discovery.sh@75 -- # notify_id=2 00:15:39.313 04:30:42 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:39.313 04:30:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.313 04:30:42 -- common/autotest_common.sh@10 -- # set +x 00:15:39.313 04:30:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.313 04:30:42 -- host/discovery.sh@135 -- # sleep 1 00:15:40.249 04:30:43 -- host/discovery.sh@136 -- # get_subsystem_names 00:15:40.249 04:30:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.249 04:30:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.249 04:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.249 04:30:43 -- common/autotest_common.sh@10 -- # set +x 00:15:40.249 04:30:43 -- host/discovery.sh@59 -- # sort 00:15:40.249 04:30:43 -- host/discovery.sh@59 -- # xargs 00:15:40.249 04:30:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.249 04:30:43 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:15:40.249 04:30:43 -- host/discovery.sh@137 -- # get_bdev_list 00:15:40.249 04:30:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.249 04:30:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.249 04:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.249 04:30:43 -- host/discovery.sh@55 -- # xargs 00:15:40.249 04:30:43 -- common/autotest_common.sh@10 -- # set +x 00:15:40.249 04:30:43 -- host/discovery.sh@55 -- # sort 00:15:40.249 04:30:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.249 04:30:43 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:15:40.249 04:30:43 -- host/discovery.sh@138 -- # get_notification_count 00:15:40.249 04:30:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:40.249 04:30:43 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:40.249 04:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.249 04:30:43 -- common/autotest_common.sh@10 -- # set +x 00:15:40.508 04:30:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.508 04:30:43 -- host/discovery.sh@74 -- # notification_count=2 00:15:40.508 04:30:43 -- host/discovery.sh@75 -- # notify_id=4 00:15:40.508 04:30:43 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:15:40.508 04:30:43 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:40.508 04:30:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.508 04:30:43 -- common/autotest_common.sh@10 -- # set +x 00:15:41.444 [2024-12-07 04:30:44.543030] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:41.444 [2024-12-07 04:30:44.543240] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:41.444 [2024-12-07 04:30:44.543275] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:41.444 [2024-12-07 04:30:44.549068] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:41.444 [2024-12-07 04:30:44.608325] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:41.444 [2024-12-07 04:30:44.608363] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:41.444 04:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.444 04:30:44 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.445 04:30:44 -- common/autotest_common.sh@650 -- # local es=0 00:15:41.445 04:30:44 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.445 04:30:44 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:41.445 04:30:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.445 04:30:44 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:41.445 04:30:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.445 04:30:44 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.445 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.445 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 request: 00:15:41.445 { 00:15:41.445 "name": "nvme", 00:15:41.445 "trtype": "tcp", 00:15:41.445 "traddr": "10.0.0.2", 00:15:41.445 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:41.445 "adrfam": "ipv4", 00:15:41.445 "trsvcid": "8009", 00:15:41.445 "wait_for_attach": true, 00:15:41.445 "method": "bdev_nvme_start_discovery", 00:15:41.445 "req_id": 1 00:15:41.445 } 00:15:41.445 Got JSON-RPC error response 00:15:41.445 response: 00:15:41.445 { 00:15:41.445 "code": -17, 00:15:41.445 "message": "File exists" 00:15:41.445 } 00:15:41.445 04:30:44 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:41.445 04:30:44 -- common/autotest_common.sh@653 -- # es=1 00:15:41.445 04:30:44 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.445 04:30:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.445 04:30:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.445 04:30:44 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:15:41.445 04:30:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:41.445 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.445 04:30:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:41.445 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:41.445 04:30:44 -- host/discovery.sh@67 -- # sort 00:15:41.445 04:30:44 -- host/discovery.sh@67 -- # xargs 00:15:41.445 04:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.704 04:30:44 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:15:41.704 04:30:44 -- host/discovery.sh@147 -- # get_bdev_list 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.704 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # sort 00:15:41.704 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # xargs 00:15:41.704 04:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.704 04:30:44 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:41.704 04:30:44 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.704 04:30:44 -- common/autotest_common.sh@650 -- # local es=0 00:15:41.704 04:30:44 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.704 04:30:44 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:41.704 04:30:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.704 04:30:44 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:41.704 04:30:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.704 04:30:44 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.704 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.704 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 request: 00:15:41.704 { 00:15:41.704 "name": "nvme_second", 00:15:41.704 "trtype": "tcp", 00:15:41.704 "traddr": "10.0.0.2", 00:15:41.704 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:41.704 "adrfam": "ipv4", 00:15:41.704 "trsvcid": "8009", 00:15:41.704 "wait_for_attach": true, 00:15:41.704 "method": "bdev_nvme_start_discovery", 00:15:41.704 "req_id": 1 00:15:41.704 } 00:15:41.704 Got JSON-RPC error response 00:15:41.704 response: 00:15:41.704 { 00:15:41.704 "code": -17, 00:15:41.704 "message": "File exists" 00:15:41.704 } 00:15:41.704 04:30:44 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:41.704 04:30:44 -- common/autotest_common.sh@653 -- # es=1 00:15:41.704 04:30:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.704 04:30:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.704 04:30:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.704 
04:30:44 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:15:41.704 04:30:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:41.704 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.704 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 04:30:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:41.704 04:30:44 -- host/discovery.sh@67 -- # sort 00:15:41.704 04:30:44 -- host/discovery.sh@67 -- # xargs 00:15:41.704 04:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.704 04:30:44 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:15:41.704 04:30:44 -- host/discovery.sh@153 -- # get_bdev_list 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # sort 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:41.704 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.704 04:30:44 -- host/discovery.sh@55 -- # xargs 00:15:41.704 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:41.704 04:30:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.704 04:30:44 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:41.704 04:30:44 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:41.704 04:30:44 -- common/autotest_common.sh@650 -- # local es=0 00:15:41.704 04:30:44 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:41.704 04:30:44 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:41.704 04:30:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.704 04:30:44 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:41.704 04:30:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.704 04:30:44 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:41.704 04:30:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.704 04:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:43.078 [2024-12-07 04:30:45.882407] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:43.078 [2024-12-07 04:30:45.882534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:43.078 [2024-12-07 04:30:45.882575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:43.078 [2024-12-07 04:30:45.882591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ac270 with addr=10.0.0.2, port=8010 00:15:43.078 [2024-12-07 04:30:45.882607] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:43.078 [2024-12-07 04:30:45.882616] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:43.078 [2024-12-07 04:30:45.882625] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:43.644 [2024-12-07 04:30:46.882410] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:43.644 [2024-12-07 04:30:46.882519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:15:43.644 [2024-12-07 04:30:46.882558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:43.644 [2024-12-07 04:30:46.882573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ac270 with addr=10.0.0.2, port=8010 00:15:43.644 [2024-12-07 04:30:46.882589] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:43.644 [2024-12-07 04:30:46.882597] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:43.644 [2024-12-07 04:30:46.882606] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:45.016 [2024-12-07 04:30:47.882264] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:45.016 request: 00:15:45.016 { 00:15:45.016 "name": "nvme_second", 00:15:45.016 "trtype": "tcp", 00:15:45.016 "traddr": "10.0.0.2", 00:15:45.016 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:45.016 "adrfam": "ipv4", 00:15:45.016 "trsvcid": "8010", 00:15:45.016 "attach_timeout_ms": 3000, 00:15:45.016 "method": "bdev_nvme_start_discovery", 00:15:45.016 "req_id": 1 00:15:45.016 } 00:15:45.016 Got JSON-RPC error response 00:15:45.016 response: 00:15:45.016 { 00:15:45.016 "code": -110, 00:15:45.016 "message": "Connection timed out" 00:15:45.016 } 00:15:45.016 04:30:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:45.016 04:30:47 -- common/autotest_common.sh@653 -- # es=1 00:15:45.016 04:30:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:45.016 04:30:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:45.016 04:30:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:45.016 04:30:47 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:15:45.016 04:30:47 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:45.016 04:30:47 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:45.016 04:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.016 04:30:47 -- host/discovery.sh@67 -- # sort 00:15:45.016 04:30:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.016 04:30:47 -- host/discovery.sh@67 -- # xargs 00:15:45.016 04:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.016 04:30:47 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:15:45.016 04:30:47 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:15:45.016 04:30:47 -- host/discovery.sh@162 -- # kill 70650 00:15:45.016 04:30:47 -- host/discovery.sh@163 -- # nvmftestfini 00:15:45.016 04:30:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:45.016 04:30:47 -- nvmf/common.sh@116 -- # sync 00:15:45.016 04:30:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:45.016 04:30:47 -- nvmf/common.sh@119 -- # set +e 00:15:45.016 04:30:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:45.016 04:30:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:45.016 rmmod nvme_tcp 00:15:45.016 rmmod nvme_fabrics 00:15:45.016 rmmod nvme_keyring 00:15:45.016 04:30:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:45.016 04:30:48 -- nvmf/common.sh@123 -- # set -e 00:15:45.016 04:30:48 -- nvmf/common.sh@124 -- # return 0 00:15:45.016 04:30:48 -- nvmf/common.sh@477 -- # '[' -n 70616 ']' 00:15:45.016 04:30:48 -- nvmf/common.sh@478 -- # killprocess 70616 00:15:45.016 04:30:48 -- common/autotest_common.sh@936 -- # '[' -z 70616 ']' 00:15:45.016 04:30:48 -- common/autotest_common.sh@940 -- # kill -0 70616 00:15:45.016 04:30:48 -- 
common/autotest_common.sh@941 -- # uname 00:15:45.016 04:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.016 04:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70616 00:15:45.016 04:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:45.016 04:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:45.016 killing process with pid 70616 00:15:45.016 04:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70616' 00:15:45.016 04:30:48 -- common/autotest_common.sh@955 -- # kill 70616 00:15:45.016 04:30:48 -- common/autotest_common.sh@960 -- # wait 70616 00:15:45.274 04:30:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:45.274 04:30:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:45.274 04:30:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:45.274 04:30:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.274 04:30:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:45.274 04:30:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.274 04:30:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.274 04:30:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.274 04:30:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:45.274 00:15:45.274 real 0m13.914s 00:15:45.274 user 0m26.759s 00:15:45.274 sys 0m2.157s 00:15:45.274 04:30:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:45.274 ************************************ 00:15:45.274 END TEST nvmf_discovery 00:15:45.274 04:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.274 ************************************ 00:15:45.274 04:30:48 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:45.274 04:30:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:45.274 04:30:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:45.274 04:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.274 ************************************ 00:15:45.274 START TEST nvmf_discovery_remove_ifc 00:15:45.274 ************************************ 00:15:45.274 04:30:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:45.274 * Looking for test storage... 
00:15:45.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:45.274 04:30:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:45.274 04:30:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:45.274 04:30:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:45.274 04:30:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:45.274 04:30:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:45.274 04:30:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:45.274 04:30:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:45.274 04:30:48 -- scripts/common.sh@335 -- # IFS=.-: 00:15:45.274 04:30:48 -- scripts/common.sh@335 -- # read -ra ver1 00:15:45.274 04:30:48 -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.274 04:30:48 -- scripts/common.sh@336 -- # read -ra ver2 00:15:45.274 04:30:48 -- scripts/common.sh@337 -- # local 'op=<' 00:15:45.274 04:30:48 -- scripts/common.sh@339 -- # ver1_l=2 00:15:45.274 04:30:48 -- scripts/common.sh@340 -- # ver2_l=1 00:15:45.274 04:30:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:45.274 04:30:48 -- scripts/common.sh@343 -- # case "$op" in 00:15:45.274 04:30:48 -- scripts/common.sh@344 -- # : 1 00:15:45.274 04:30:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:45.274 04:30:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:45.274 04:30:48 -- scripts/common.sh@364 -- # decimal 1 00:15:45.274 04:30:48 -- scripts/common.sh@352 -- # local d=1 00:15:45.274 04:30:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.274 04:30:48 -- scripts/common.sh@354 -- # echo 1 00:15:45.274 04:30:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:45.274 04:30:48 -- scripts/common.sh@365 -- # decimal 2 00:15:45.274 04:30:48 -- scripts/common.sh@352 -- # local d=2 00:15:45.274 04:30:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.274 04:30:48 -- scripts/common.sh@354 -- # echo 2 00:15:45.274 04:30:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:45.274 04:30:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:45.274 04:30:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:45.274 04:30:48 -- scripts/common.sh@367 -- # return 0 00:15:45.274 04:30:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.274 04:30:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:45.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.274 --rc genhtml_branch_coverage=1 00:15:45.274 --rc genhtml_function_coverage=1 00:15:45.274 --rc genhtml_legend=1 00:15:45.274 --rc geninfo_all_blocks=1 00:15:45.274 --rc geninfo_unexecuted_blocks=1 00:15:45.274 00:15:45.274 ' 00:15:45.274 04:30:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:45.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.274 --rc genhtml_branch_coverage=1 00:15:45.274 --rc genhtml_function_coverage=1 00:15:45.274 --rc genhtml_legend=1 00:15:45.274 --rc geninfo_all_blocks=1 00:15:45.274 --rc geninfo_unexecuted_blocks=1 00:15:45.275 00:15:45.275 ' 00:15:45.275 04:30:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.275 --rc genhtml_branch_coverage=1 00:15:45.275 --rc genhtml_function_coverage=1 00:15:45.275 --rc genhtml_legend=1 00:15:45.275 --rc geninfo_all_blocks=1 00:15:45.275 --rc geninfo_unexecuted_blocks=1 00:15:45.275 00:15:45.275 ' 00:15:45.275 
04:30:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:45.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.275 --rc genhtml_branch_coverage=1 00:15:45.275 --rc genhtml_function_coverage=1 00:15:45.275 --rc genhtml_legend=1 00:15:45.275 --rc geninfo_all_blocks=1 00:15:45.275 --rc geninfo_unexecuted_blocks=1 00:15:45.275 00:15:45.275 ' 00:15:45.275 04:30:48 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.275 04:30:48 -- nvmf/common.sh@7 -- # uname -s 00:15:45.275 04:30:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.275 04:30:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.275 04:30:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.275 04:30:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.275 04:30:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.275 04:30:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.275 04:30:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.275 04:30:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.275 04:30:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.275 04:30:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.532 04:30:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:15:45.532 04:30:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:15:45.532 04:30:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.532 04:30:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.532 04:30:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.532 04:30:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.532 04:30:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.532 04:30:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.532 04:30:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.532 04:30:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 04:30:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 04:30:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 04:30:48 -- paths/export.sh@5 -- # export PATH 00:15:45.532 04:30:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.532 04:30:48 -- nvmf/common.sh@46 -- # : 0 00:15:45.532 04:30:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:45.532 04:30:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:45.532 04:30:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:45.532 04:30:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.532 04:30:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.532 04:30:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:45.532 04:30:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:45.532 04:30:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:45.532 04:30:48 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:45.532 04:30:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:45.532 04:30:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.532 04:30:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:45.532 04:30:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:45.532 04:30:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:45.532 04:30:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.532 04:30:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.532 04:30:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.532 04:30:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:45.532 04:30:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:45.532 04:30:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:45.532 04:30:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:45.532 04:30:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:45.532 04:30:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:45.532 04:30:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.532 04:30:48 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.532 04:30:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:45.532 04:30:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:45.532 04:30:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.532 04:30:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.532 04:30:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.532 04:30:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.532 04:30:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.532 04:30:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.532 04:30:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.532 04:30:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.532 04:30:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:45.532 04:30:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:45.532 Cannot find device "nvmf_tgt_br" 00:15:45.532 04:30:48 -- nvmf/common.sh@154 -- # true 00:15:45.532 04:30:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.532 Cannot find device "nvmf_tgt_br2" 00:15:45.532 04:30:48 -- nvmf/common.sh@155 -- # true 00:15:45.532 04:30:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:45.532 04:30:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:45.532 Cannot find device "nvmf_tgt_br" 00:15:45.532 04:30:48 -- nvmf/common.sh@157 -- # true 00:15:45.532 04:30:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:45.532 Cannot find device "nvmf_tgt_br2" 00:15:45.532 04:30:48 -- nvmf/common.sh@158 -- # true 00:15:45.532 04:30:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:45.532 04:30:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:45.532 04:30:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.532 04:30:48 -- nvmf/common.sh@161 -- # true 00:15:45.532 04:30:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.532 04:30:48 -- nvmf/common.sh@162 -- # true 00:15:45.532 04:30:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.532 04:30:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.532 04:30:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.532 04:30:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.532 04:30:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.532 04:30:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.532 04:30:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.532 04:30:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:45.532 04:30:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:45.532 04:30:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:45.532 04:30:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:45.532 04:30:48 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:45.532 04:30:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:45.532 04:30:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.532 04:30:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.532 04:30:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.789 04:30:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:45.789 04:30:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:45.789 04:30:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.789 04:30:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.789 04:30:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.789 04:30:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.789 04:30:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.789 04:30:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:45.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:15:45.789 00:15:45.789 --- 10.0.0.2 ping statistics --- 00:15:45.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.789 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:45.789 04:30:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:45.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:45.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:15:45.789 00:15:45.789 --- 10.0.0.3 ping statistics --- 00:15:45.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.789 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:45.789 04:30:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:45.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:45.789 00:15:45.789 --- 10.0.0.1 ping statistics --- 00:15:45.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.789 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:45.789 04:30:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.789 04:30:48 -- nvmf/common.sh@421 -- # return 0 00:15:45.789 04:30:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:45.789 04:30:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.789 04:30:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:45.789 04:30:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:45.789 04:30:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.789 04:30:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:45.789 04:30:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:45.789 04:30:48 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:45.789 04:30:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:45.789 04:30:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:45.789 04:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.789 04:30:48 -- nvmf/common.sh@469 -- # nvmfpid=71150 00:15:45.789 04:30:48 -- nvmf/common.sh@470 -- # waitforlisten 71150 00:15:45.790 04:30:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:45.790 04:30:48 -- common/autotest_common.sh@829 -- # '[' -z 71150 ']' 00:15:45.790 04:30:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.790 04:30:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.790 04:30:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.790 04:30:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.790 04:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.790 [2024-12-07 04:30:48.930354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:45.790 [2024-12-07 04:30:48.930444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.046 [2024-12-07 04:30:49.069418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.046 [2024-12-07 04:30:49.136370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.046 [2024-12-07 04:30:49.136547] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.046 [2024-12-07 04:30:49.136563] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.046 [2024-12-07 04:30:49.136574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
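Everything from the ip netns add through the three ping checks above is nvmf_veth_init building the private test topology: one veth pair whose initiator end stays in the root namespace and two pairs whose far ends are moved into nvmf_tgt_ns_spdk, with all of the peer ends enslaved to a bridge. Condensed into a standalone sketch, with interface names and addresses copied from the trace and the preliminary cleanup of stale devices omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, stays in root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # root ns reaches both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target ns reaches the initiator

The same bring-up runs once per test script, which is why the identical sequence reappears later in this log for the digest test.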
00:15:46.046 [2024-12-07 04:30:49.136611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.021 04:30:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.021 04:30:49 -- common/autotest_common.sh@862 -- # return 0 00:15:47.021 04:30:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.021 04:30:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.021 04:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:47.021 04:30:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.021 04:30:49 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:47.021 04:30:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.021 04:30:49 -- common/autotest_common.sh@10 -- # set +x 00:15:47.021 [2024-12-07 04:30:49.957404] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.021 [2024-12-07 04:30:49.965514] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:47.021 null0 00:15:47.021 [2024-12-07 04:30:49.997607] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.021 04:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.021 04:30:50 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71182 00:15:47.021 04:30:50 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:47.021 04:30:50 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71182 /tmp/host.sock 00:15:47.021 04:30:50 -- common/autotest_common.sh@829 -- # '[' -z 71182 ']' 00:15:47.021 04:30:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:47.021 04:30:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.021 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:47.021 04:30:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:47.021 04:30:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.021 04:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.021 [2024-12-07 04:30:50.075492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
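The bare rpc_cmd at discovery_remove_ifc.sh:43 pushes a batch of configuration to the target over /var/tmp/spdk.sock; only its side effects are echoed above (TCP transport init, a discovery listener on 10.0.0.2:8009, a null bdev, and a data listener on port 4420). A plausible reconstruction of that batch, written as individual calls using standard SPDK RPC verbs plus the identifiers visible in the trace; the exact arguments are not reproduced in this log, so treat the sizes and host-access options as illustrative:

    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512                      # backing namespace; sizes illustrative
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a  # -a: allow any host (assumed)
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Immediately afterwards a second nvmf_tgt instance (pid 71182) is started on /tmp/host.sock with -L bdev_nvme; that one plays the host role, and it is the process all of the later rpc_cmd -s /tmp/host.sock calls talk to.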
00:15:47.021 [2024-12-07 04:30:50.075598] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71182 ] 00:15:47.021 [2024-12-07 04:30:50.213382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.280 [2024-12-07 04:30:50.264824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.280 [2024-12-07 04:30:50.265215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.280 04:30:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.280 04:30:50 -- common/autotest_common.sh@862 -- # return 0 00:15:47.280 04:30:50 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.280 04:30:50 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:47.280 04:30:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.280 04:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.280 04:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.280 04:30:50 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:47.280 04:30:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.280 04:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.280 04:30:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.280 04:30:50 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:47.280 04:30:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.280 04:30:50 -- common/autotest_common.sh@10 -- # set +x 00:15:48.218 [2024-12-07 04:30:51.370902] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:48.218 [2024-12-07 04:30:51.370978] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:48.218 [2024-12-07 04:30:51.370997] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:48.218 [2024-12-07 04:30:51.376939] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:48.218 [2024-12-07 04:30:51.432518] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:48.218 [2024-12-07 04:30:51.432582] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:48.218 [2024-12-07 04:30:51.432606] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:48.218 [2024-12-07 04:30:51.432621] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:48.218 [2024-12-07 04:30:51.432642] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:48.218 04:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.218 04:30:51 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:48.218 04:30:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.218 04:30:51 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.218 04:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.218 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:15:48.218 [2024-12-07 04:30:51.439668] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x115dbe0 was disconnected and freed. delete nvme_qpair. 00:15:48.218 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.218 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.218 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:48.478 04:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:48.478 04:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.478 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:15:48.478 04:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:48.478 04:30:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:49.413 04:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:49.413 04:30:52 -- common/autotest_common.sh@10 -- # set +x 00:15:49.413 04:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:49.413 04:30:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.792 04:30:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.792 04:30:53 -- common/autotest_common.sh@10 -- # set +x 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.792 04:30:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:50.792 04:30:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.729 04:30:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:51.729 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:51.729 04:30:54 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.729 04:30:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.729 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:51.729 04:30:54 -- common/autotest_common.sh@10 -- # set +x 00:15:51.729 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:51.729 04:30:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.729 04:30:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:51.729 04:30:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.666 04:30:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.666 04:30:55 -- common/autotest_common.sh@10 -- # set +x 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.666 04:30:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:52.666 04:30:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.603 04:30:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.603 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.603 04:30:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.603 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.603 04:30:56 -- common/autotest_common.sh@10 -- # set +x 00:15:53.603 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.603 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.603 04:30:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.863 04:30:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:53.863 04:30:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.863 [2024-12-07 04:30:56.860765] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:53.863 [2024-12-07 04:30:56.860823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.863 [2024-12-07 04:30:56.860839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.863 [2024-12-07 04:30:56.860851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.863 [2024-12-07 04:30:56.860861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.863 [2024-12-07 04:30:56.860870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.863 [2024-12-07 04:30:56.860879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.863 [2024-12-07 04:30:56.860889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.863 [2024-12-07 04:30:56.860898] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.863 [2024-12-07 04:30:56.860925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.863 [2024-12-07 04:30:56.860934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.863 [2024-12-07 04:30:56.860943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d2de0 is same with the state(5) to be set 00:15:53.863 [2024-12-07 04:30:56.870765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d2de0 (9): Bad file descriptor 00:15:53.863 [2024-12-07 04:30:56.880781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:54.799 04:30:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:54.799 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.799 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:54.799 04:30:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.799 04:30:57 -- common/autotest_common.sh@10 -- # set +x 00:15:54.799 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:54.799 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:54.799 [2024-12-07 04:30:57.941767] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:55.736 [2024-12-07 04:30:58.965791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:57.115 [2024-12-07 04:30:59.989771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:57.115 [2024-12-07 04:30:59.989932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d2de0 with addr=10.0.0.2, port=4420 00:15:57.115 [2024-12-07 04:30:59.989969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d2de0 is same with the state(5) to be set 00:15:57.116 [2024-12-07 04:30:59.990029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:57.116 [2024-12-07 04:30:59.990052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:57.116 [2024-12-07 04:30:59.990072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:57.116 [2024-12-07 04:30:59.990093] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:57.116 [2024-12-07 04:30:59.990917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d2de0 (9): Bad file descriptor 00:15:57.116 [2024-12-07 04:30:59.991019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
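The repeated bdev_get_bdevs calls in the trace are the script's one-second polling loop waiting for the host's bdev list to change. Pieced together from the xtrace lines, the two helpers behind it look roughly like the following; this is a reconstruction rather than the literal function bodies, and it assumes rpc_cmd resolves to the SPDK rpc.py wrapper used throughout this run:

    get_bdev_list() {
        # ask the host-side app on /tmp/host.sock for its bdevs and flatten
        # the names into one sorted, space-separated line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value:
        # "nvme0n1" while the path is healthy, "" once the controller is dropped
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }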
00:15:57.116 [2024-12-07 04:30:59.991072] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:57.116 [2024-12-07 04:30:59.991139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.116 [2024-12-07 04:30:59.991169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.116 [2024-12-07 04:30:59.991195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.116 [2024-12-07 04:30:59.991225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.116 [2024-12-07 04:30:59.991256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.116 [2024-12-07 04:30:59.991277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.116 [2024-12-07 04:30:59.991299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.116 [2024-12-07 04:30:59.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.116 [2024-12-07 04:30:59.991342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.116 [2024-12-07 04:30:59.991394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.116 [2024-12-07 04:30:59.991429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
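The cadence above (connect() failing with errno 110 once per second, then the controller being declared lost and the discovery entry removed) is set by the flags that were passed to bdev_nvme_start_discovery when the test began. The command, repeated from earlier in this trace with the flag meanings paraphrased:

    # -b nvme                       : name prefix for attached controllers (nvme0, nvme1, ...)
    # -q nqn.2021-12.io.spdk:test   : host NQN used when connecting
    # --reconnect-delay-sec 1       : retry the lost connection once per second
    # --ctrlr-loss-timeout-sec 2    : give up and delete the controller 2 s after the path is lost
    # --fast-io-fail-timeout-sec 1  : fail queued I/O after 1 s instead of holding it until the loss timeout
    # --wait-for-attach             : do not return until the discovered subsystem is attached
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

Once the loss timeout expires the nvme0n1 bdev disappears, which is exactly what the wait_for_bdev '' call above is waiting for.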
00:15:57.116 [2024-12-07 04:30:59.991461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d31f0 (9): Bad file descriptor 00:15:57.116 [2024-12-07 04:30:59.992050] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:57.116 [2024-12-07 04:30:59.992098] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:57.116 04:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.116 04:31:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:57.116 04:31:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.054 04:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.054 04:31:01 -- common/autotest_common.sh@10 -- # set +x 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.054 04:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.054 04:31:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.054 04:31:01 -- common/autotest_common.sh@10 -- # set +x 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.054 04:31:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:58.054 04:31:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:58.990 [2024-12-07 04:31:01.996606] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:58.991 [2024-12-07 04:31:01.996663] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:58.991 [2024-12-07 04:31:01.996697] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:58.991 [2024-12-07 04:31:02.002639] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:58.991 [2024-12-07 04:31:02.057450] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:58.991 [2024-12-07 04:31:02.057512] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:58.991 [2024-12-07 04:31:02.057533] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:58.991 [2024-12-07 04:31:02.057548] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:58.991 [2024-12-07 04:31:02.057557] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:58.991 [2024-12-07 04:31:02.064852] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1114ce0 was disconnected and freed. delete nvme_qpair. 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.991 04:31:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.991 04:31:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.991 04:31:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:58.991 04:31:02 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71182 00:15:58.991 04:31:02 -- common/autotest_common.sh@936 -- # '[' -z 71182 ']' 00:15:58.991 04:31:02 -- common/autotest_common.sh@940 -- # kill -0 71182 00:15:58.991 04:31:02 -- common/autotest_common.sh@941 -- # uname 00:15:58.991 04:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.991 04:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71182 00:15:59.250 killing process with pid 71182 00:15:59.250 04:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.250 04:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.250 04:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71182' 00:15:59.250 04:31:02 -- common/autotest_common.sh@955 -- # kill 71182 00:15:59.250 04:31:02 -- common/autotest_common.sh@960 -- # wait 71182 00:15:59.250 04:31:02 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:59.250 04:31:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:59.250 04:31:02 -- nvmf/common.sh@116 -- # sync 00:15:59.250 04:31:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:59.250 04:31:02 -- nvmf/common.sh@119 -- # set +e 00:15:59.250 04:31:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:59.250 04:31:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:59.250 rmmod nvme_tcp 00:15:59.250 rmmod nvme_fabrics 00:15:59.515 rmmod nvme_keyring 00:15:59.515 04:31:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:59.515 04:31:02 -- nvmf/common.sh@123 -- # set -e 00:15:59.515 04:31:02 -- nvmf/common.sh@124 -- # return 0 00:15:59.515 04:31:02 -- nvmf/common.sh@477 -- # '[' -n 71150 ']' 00:15:59.515 04:31:02 -- nvmf/common.sh@478 -- # killprocess 71150 00:15:59.515 04:31:02 -- common/autotest_common.sh@936 -- # '[' -z 71150 ']' 00:15:59.515 04:31:02 -- common/autotest_common.sh@940 -- # kill -0 71150 00:15:59.515 04:31:02 -- common/autotest_common.sh@941 -- # uname 00:15:59.515 04:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.515 04:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71150 00:15:59.515 killing process with pid 71150 00:15:59.515 04:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:59.515 04:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
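Restoring the path is the mirror image of removing it: the address goes back onto the namespace end of the veth pair, the link comes back up, and the still-running discovery service reconnects on its own, surfacing the namespace as a fresh bdev (nvme1n1 rather than nvme0n1). From the trace above, plus the polling helper sketched earlier:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1    # poll until the re-attached namespace shows up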
00:15:59.515 04:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71150' 00:15:59.515 04:31:02 -- common/autotest_common.sh@955 -- # kill 71150 00:15:59.515 04:31:02 -- common/autotest_common.sh@960 -- # wait 71150 00:15:59.515 04:31:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.515 04:31:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:59.515 04:31:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:59.515 04:31:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.515 04:31:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:59.515 04:31:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.515 04:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.515 04:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.785 04:31:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:59.785 00:15:59.785 real 0m14.421s 00:15:59.785 user 0m22.792s 00:15:59.785 sys 0m2.313s 00:15:59.785 04:31:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:59.786 04:31:02 -- common/autotest_common.sh@10 -- # set +x 00:15:59.786 ************************************ 00:15:59.786 END TEST nvmf_discovery_remove_ifc 00:15:59.786 ************************************ 00:15:59.786 04:31:02 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:15:59.786 04:31:02 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:59.786 04:31:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:59.786 04:31:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.786 04:31:02 -- common/autotest_common.sh@10 -- # set +x 00:15:59.786 ************************************ 00:15:59.786 START TEST nvmf_digest 00:15:59.786 ************************************ 00:15:59.786 04:31:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:59.786 * Looking for test storage... 00:15:59.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:59.786 04:31:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:59.786 04:31:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:59.786 04:31:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:59.786 04:31:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:59.786 04:31:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:59.786 04:31:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:59.786 04:31:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:59.786 04:31:02 -- scripts/common.sh@335 -- # IFS=.-: 00:15:59.786 04:31:02 -- scripts/common.sh@335 -- # read -ra ver1 00:15:59.786 04:31:02 -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.786 04:31:02 -- scripts/common.sh@336 -- # read -ra ver2 00:15:59.786 04:31:02 -- scripts/common.sh@337 -- # local 'op=<' 00:15:59.786 04:31:02 -- scripts/common.sh@339 -- # ver1_l=2 00:15:59.786 04:31:02 -- scripts/common.sh@340 -- # ver2_l=1 00:15:59.786 04:31:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:59.786 04:31:02 -- scripts/common.sh@343 -- # case "$op" in 00:15:59.786 04:31:02 -- scripts/common.sh@344 -- # : 1 00:15:59.786 04:31:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:59.786 04:31:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.786 04:31:02 -- scripts/common.sh@364 -- # decimal 1 00:15:59.786 04:31:02 -- scripts/common.sh@352 -- # local d=1 00:15:59.786 04:31:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.786 04:31:02 -- scripts/common.sh@354 -- # echo 1 00:15:59.786 04:31:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:59.786 04:31:02 -- scripts/common.sh@365 -- # decimal 2 00:15:59.786 04:31:02 -- scripts/common.sh@352 -- # local d=2 00:15:59.786 04:31:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.786 04:31:02 -- scripts/common.sh@354 -- # echo 2 00:15:59.786 04:31:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:59.786 04:31:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:59.786 04:31:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:59.786 04:31:02 -- scripts/common.sh@367 -- # return 0 00:15:59.786 04:31:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.786 04:31:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:59.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.786 --rc genhtml_branch_coverage=1 00:15:59.786 --rc genhtml_function_coverage=1 00:15:59.786 --rc genhtml_legend=1 00:15:59.786 --rc geninfo_all_blocks=1 00:15:59.786 --rc geninfo_unexecuted_blocks=1 00:15:59.786 00:15:59.786 ' 00:15:59.786 04:31:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:59.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.786 --rc genhtml_branch_coverage=1 00:15:59.786 --rc genhtml_function_coverage=1 00:15:59.786 --rc genhtml_legend=1 00:15:59.786 --rc geninfo_all_blocks=1 00:15:59.786 --rc geninfo_unexecuted_blocks=1 00:15:59.786 00:15:59.786 ' 00:15:59.786 04:31:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:59.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.786 --rc genhtml_branch_coverage=1 00:15:59.786 --rc genhtml_function_coverage=1 00:15:59.786 --rc genhtml_legend=1 00:15:59.786 --rc geninfo_all_blocks=1 00:15:59.786 --rc geninfo_unexecuted_blocks=1 00:15:59.786 00:15:59.786 ' 00:15:59.786 04:31:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:59.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.786 --rc genhtml_branch_coverage=1 00:15:59.786 --rc genhtml_function_coverage=1 00:15:59.786 --rc genhtml_legend=1 00:15:59.786 --rc geninfo_all_blocks=1 00:15:59.786 --rc geninfo_unexecuted_blocks=1 00:15:59.786 00:15:59.786 ' 00:15:59.786 04:31:02 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.786 04:31:02 -- nvmf/common.sh@7 -- # uname -s 00:15:59.786 04:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.786 04:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.786 04:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.786 04:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.786 04:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.786 04:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.786 04:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.786 04:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.786 04:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.786 04:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.786 04:31:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:15:59.786 
04:31:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:15:59.786 04:31:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.786 04:31:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.786 04:31:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.786 04:31:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.786 04:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.786 04:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.786 04:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.786 04:31:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.786 04:31:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.786 04:31:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.786 04:31:03 -- paths/export.sh@5 -- # export PATH 00:15:59.786 04:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.786 04:31:03 -- nvmf/common.sh@46 -- # : 0 00:15:59.786 04:31:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:59.786 04:31:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:59.786 04:31:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:59.786 04:31:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.786 04:31:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.786 04:31:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:59.786 04:31:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:59.786 04:31:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:59.786 04:31:03 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:59.786 04:31:03 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:59.786 04:31:03 -- host/digest.sh@16 -- # runtime=2 00:15:59.786 04:31:03 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:15:59.786 04:31:03 -- host/digest.sh@132 -- # nvmftestinit 00:15:59.786 04:31:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:59.786 04:31:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.787 04:31:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:59.787 04:31:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:59.787 04:31:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:59.787 04:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.787 04:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.787 04:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.045 04:31:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:00.045 04:31:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:00.045 04:31:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:00.045 04:31:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:00.045 04:31:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:00.045 04:31:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:00.045 04:31:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.045 04:31:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.045 04:31:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:00.045 04:31:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:00.045 04:31:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.045 04:31:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.045 04:31:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.045 04:31:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.045 04:31:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.045 04:31:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.045 04:31:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.045 04:31:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.045 04:31:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:00.045 04:31:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:00.045 Cannot find device "nvmf_tgt_br" 00:16:00.045 04:31:03 -- nvmf/common.sh@154 -- # true 00:16:00.045 04:31:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.045 Cannot find device "nvmf_tgt_br2" 00:16:00.045 04:31:03 -- nvmf/common.sh@155 -- # true 00:16:00.045 04:31:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:00.045 04:31:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:00.045 Cannot find device "nvmf_tgt_br" 00:16:00.045 04:31:03 -- nvmf/common.sh@157 -- # true 00:16:00.045 04:31:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:00.045 Cannot find device "nvmf_tgt_br2" 00:16:00.045 04:31:03 -- nvmf/common.sh@158 -- # true 00:16:00.045 04:31:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:00.045 04:31:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:00.045 
04:31:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.045 04:31:03 -- nvmf/common.sh@161 -- # true 00:16:00.045 04:31:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.045 04:31:03 -- nvmf/common.sh@162 -- # true 00:16:00.045 04:31:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.045 04:31:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.045 04:31:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.045 04:31:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.045 04:31:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.045 04:31:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.045 04:31:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.045 04:31:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.045 04:31:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.045 04:31:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:00.045 04:31:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:00.045 04:31:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:00.045 04:31:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:00.045 04:31:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.045 04:31:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.046 04:31:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.046 04:31:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:00.305 04:31:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:00.305 04:31:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.305 04:31:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:00.305 04:31:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.305 04:31:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.305 04:31:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.305 04:31:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:00.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:00.305 00:16:00.305 --- 10.0.0.2 ping statistics --- 00:16:00.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.305 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:00.305 04:31:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:00.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:00.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:00.305 00:16:00.305 --- 10.0.0.3 ping statistics --- 00:16:00.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.305 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:00.305 04:31:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:00.305 00:16:00.305 --- 10.0.0.1 ping statistics --- 00:16:00.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.305 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:00.305 04:31:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.305 04:31:03 -- nvmf/common.sh@421 -- # return 0 00:16:00.305 04:31:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:00.305 04:31:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.305 04:31:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:00.305 04:31:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:00.305 04:31:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.305 04:31:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:00.305 04:31:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:00.305 04:31:03 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:00.305 04:31:03 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:00.305 04:31:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:00.305 04:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.305 04:31:03 -- common/autotest_common.sh@10 -- # set +x 00:16:00.305 ************************************ 00:16:00.305 START TEST nvmf_digest_clean 00:16:00.305 ************************************ 00:16:00.305 04:31:03 -- common/autotest_common.sh@1114 -- # run_digest 00:16:00.305 04:31:03 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:00.305 04:31:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:00.305 04:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.305 04:31:03 -- common/autotest_common.sh@10 -- # set +x 00:16:00.305 04:31:03 -- nvmf/common.sh@469 -- # nvmfpid=71590 00:16:00.305 04:31:03 -- nvmf/common.sh@470 -- # waitforlisten 71590 00:16:00.305 04:31:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:00.305 04:31:03 -- common/autotest_common.sh@829 -- # '[' -z 71590 ']' 00:16:00.305 04:31:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.305 04:31:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.305 04:31:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.305 04:31:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.305 04:31:03 -- common/autotest_common.sh@10 -- # set +x 00:16:00.305 [2024-12-07 04:31:03.422849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
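For reference, the network that nvmf_veth_init builds in the trace above boils down to this: the initiator interface stays in the root namespace at 10.0.0.1, the two target interfaces move into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and the veth peers left behind are enslaved to the nvmf_br bridge so the ping checks can pass. A condensed sketch of those commands (the trace brings each link up individually; the loop below is only shorthand):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end, stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # accept NVMe/TCP (4420) traffic on the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow forwarding between ports of the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # root namespace -> target IPs
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator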
00:16:00.305 [2024-12-07 04:31:03.422953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.564 [2024-12-07 04:31:03.555327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.564 [2024-12-07 04:31:03.607481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.564 [2024-12-07 04:31:03.607642] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.564 [2024-12-07 04:31:03.607667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.564 [2024-12-07 04:31:03.607677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.564 [2024-12-07 04:31:03.607706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.503 04:31:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.503 04:31:04 -- common/autotest_common.sh@862 -- # return 0 00:16:01.503 04:31:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:01.503 04:31:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.503 04:31:04 -- common/autotest_common.sh@10 -- # set +x 00:16:01.503 04:31:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.503 04:31:04 -- host/digest.sh@120 -- # common_target_config 00:16:01.503 04:31:04 -- host/digest.sh@43 -- # rpc_cmd 00:16:01.503 04:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.503 04:31:04 -- common/autotest_common.sh@10 -- # set +x 00:16:01.503 null0 00:16:01.503 [2024-12-07 04:31:04.481327] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.503 [2024-12-07 04:31:04.505490] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.503 04:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.503 04:31:04 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:01.503 04:31:04 -- host/digest.sh@77 -- # local rw bs qd 00:16:01.503 04:31:04 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:01.503 04:31:04 -- host/digest.sh@80 -- # rw=randread 00:16:01.503 04:31:04 -- host/digest.sh@80 -- # bs=4096 00:16:01.503 04:31:04 -- host/digest.sh@80 -- # qd=128 00:16:01.503 04:31:04 -- host/digest.sh@82 -- # bperfpid=71622 00:16:01.503 04:31:04 -- host/digest.sh@83 -- # waitforlisten 71622 /var/tmp/bperf.sock 00:16:01.503 04:31:04 -- common/autotest_common.sh@829 -- # '[' -z 71622 ']' 00:16:01.503 04:31:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:01.503 04:31:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.503 04:31:04 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:01.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:01.503 04:31:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:16:01.503 04:31:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.503 04:31:04 -- common/autotest_common.sh@10 -- # set +x 00:16:01.503 [2024-12-07 04:31:04.565165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:01.503 [2024-12-07 04:31:04.565280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71622 ] 00:16:01.503 [2024-12-07 04:31:04.702365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.762 [2024-12-07 04:31:04.775175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.762 04:31:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.762 04:31:04 -- common/autotest_common.sh@862 -- # return 0 00:16:01.762 04:31:04 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:01.762 04:31:04 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:01.762 04:31:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:02.021 04:31:05 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:02.021 04:31:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:02.280 nvme0n1 00:16:02.280 04:31:05 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:02.280 04:31:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:02.538 Running I/O for 2 seconds... 
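Each run_bperf iteration drives bdevperf entirely over its private RPC socket, as the commands just traced show. A minimal sketch of that control sequence for this first run, with paths and arguments taken from the trace (the backgrounding and waitforlisten handling are simplified here):

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf idle: -z keeps it running, --wait-for-rpc defers subsystem init.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Once the socket is up, let the framework finish initializing.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init

# 3. Attach the target over TCP with data digest enabled (--ddgst), so every data PDU
#    carries a crc32c that the accel framework has to compute and verify.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Kick off the configured 2-second workload; the latency summary follows in the log.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests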
00:16:04.440 00:16:04.440 Latency(us) 00:16:04.440 [2024-12-07T04:31:07.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.440 [2024-12-07T04:31:07.680Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:04.440 nvme0n1 : 2.01 16321.70 63.76 0.00 0.00 7837.39 6851.49 21805.61 00:16:04.440 [2024-12-07T04:31:07.680Z] =================================================================================================================== 00:16:04.440 [2024-12-07T04:31:07.680Z] Total : 16321.70 63.76 0.00 0.00 7837.39 6851.49 21805.61 00:16:04.440 0 00:16:04.440 04:31:07 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:04.440 04:31:07 -- host/digest.sh@92 -- # get_accel_stats 00:16:04.440 04:31:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:04.440 04:31:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:04.440 | select(.opcode=="crc32c") 00:16:04.440 | "\(.module_name) \(.executed)"' 00:16:04.440 04:31:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:04.698 04:31:07 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:04.698 04:31:07 -- host/digest.sh@93 -- # exp_module=software 00:16:04.698 04:31:07 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:04.698 04:31:07 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:04.698 04:31:07 -- host/digest.sh@97 -- # killprocess 71622 00:16:04.698 04:31:07 -- common/autotest_common.sh@936 -- # '[' -z 71622 ']' 00:16:04.698 04:31:07 -- common/autotest_common.sh@940 -- # kill -0 71622 00:16:04.698 04:31:07 -- common/autotest_common.sh@941 -- # uname 00:16:04.698 04:31:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.698 04:31:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71622 00:16:04.698 04:31:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:04.698 killing process with pid 71622 00:16:04.698 04:31:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:04.698 04:31:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71622' 00:16:04.698 Received shutdown signal, test time was about 2.000000 seconds 00:16:04.698 00:16:04.698 Latency(us) 00:16:04.698 [2024-12-07T04:31:07.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.698 [2024-12-07T04:31:07.938Z] =================================================================================================================== 00:16:04.698 [2024-12-07T04:31:07.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.698 04:31:07 -- common/autotest_common.sh@955 -- # kill 71622 00:16:04.698 04:31:07 -- common/autotest_common.sh@960 -- # wait 71622 00:16:04.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
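The pass/fail decision for a clean run is taken right after the I/O completes: get_accel_stats pulls the accel counters from the still-running bdevperf, and the jq filter in the trace keeps only the crc32c entry, which must show a non-zero executed count from the expected module (software in this job, since no hardware accel engine is configured). A minimal sketch of that check using the same RPC and filter:

# Read "module executed" for the crc32c opcode from bdevperf's accel statistics.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

exp_module=software                      # no offload engine in this configuration
(( acc_executed > 0 ))                   # digests must actually have been computed
[[ "$acc_module" == "$exp_module" ]]     # ...and by the expected module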
00:16:04.957 04:31:08 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:04.957 04:31:08 -- host/digest.sh@77 -- # local rw bs qd 00:16:04.957 04:31:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:04.957 04:31:08 -- host/digest.sh@80 -- # rw=randread 00:16:04.957 04:31:08 -- host/digest.sh@80 -- # bs=131072 00:16:04.957 04:31:08 -- host/digest.sh@80 -- # qd=16 00:16:04.957 04:31:08 -- host/digest.sh@82 -- # bperfpid=71679 00:16:04.957 04:31:08 -- host/digest.sh@83 -- # waitforlisten 71679 /var/tmp/bperf.sock 00:16:04.957 04:31:08 -- common/autotest_common.sh@829 -- # '[' -z 71679 ']' 00:16:04.957 04:31:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:04.957 04:31:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.957 04:31:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:04.957 04:31:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.957 04:31:08 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:04.957 04:31:08 -- common/autotest_common.sh@10 -- # set +x 00:16:04.957 [2024-12-07 04:31:08.173969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:04.957 [2024-12-07 04:31:08.174526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71679 ] 00:16:04.957 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:04.957 Zero copy mechanism will not be used. 00:16:05.215 [2024-12-07 04:31:08.311967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.215 [2024-12-07 04:31:08.369535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.473 04:31:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.473 04:31:08 -- common/autotest_common.sh@862 -- # return 0 00:16:05.473 04:31:08 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:05.473 04:31:08 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:05.473 04:31:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:05.732 04:31:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:05.732 04:31:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:05.990 nvme0n1 00:16:05.990 04:31:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:05.990 04:31:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:06.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:06.248 Zero copy mechanism will not be used. 00:16:06.248 Running I/O for 2 seconds... 
00:16:08.147 00:16:08.147 Latency(us) 00:16:08.147 [2024-12-07T04:31:11.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.147 [2024-12-07T04:31:11.387Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:08.147 nvme0n1 : 2.00 8464.43 1058.05 0.00 0.00 1887.63 1653.29 8936.73 00:16:08.147 [2024-12-07T04:31:11.387Z] =================================================================================================================== 00:16:08.147 [2024-12-07T04:31:11.387Z] Total : 8464.43 1058.05 0.00 0.00 1887.63 1653.29 8936.73 00:16:08.147 0 00:16:08.147 04:31:11 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:08.147 04:31:11 -- host/digest.sh@92 -- # get_accel_stats 00:16:08.147 04:31:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:08.147 04:31:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:08.147 | select(.opcode=="crc32c") 00:16:08.147 | "\(.module_name) \(.executed)"' 00:16:08.147 04:31:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:08.405 04:31:11 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:08.405 04:31:11 -- host/digest.sh@93 -- # exp_module=software 00:16:08.405 04:31:11 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:08.405 04:31:11 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:08.405 04:31:11 -- host/digest.sh@97 -- # killprocess 71679 00:16:08.405 04:31:11 -- common/autotest_common.sh@936 -- # '[' -z 71679 ']' 00:16:08.405 04:31:11 -- common/autotest_common.sh@940 -- # kill -0 71679 00:16:08.405 04:31:11 -- common/autotest_common.sh@941 -- # uname 00:16:08.405 04:31:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.405 04:31:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71679 00:16:08.405 04:31:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:08.405 04:31:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:08.405 killing process with pid 71679 00:16:08.405 04:31:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71679' 00:16:08.405 04:31:11 -- common/autotest_common.sh@955 -- # kill 71679 00:16:08.405 Received shutdown signal, test time was about 2.000000 seconds 00:16:08.405 00:16:08.405 Latency(us) 00:16:08.405 [2024-12-07T04:31:11.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.405 [2024-12-07T04:31:11.645Z] =================================================================================================================== 00:16:08.405 [2024-12-07T04:31:11.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.405 04:31:11 -- common/autotest_common.sh@960 -- # wait 71679 00:16:08.663 04:31:11 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:08.663 04:31:11 -- host/digest.sh@77 -- # local rw bs qd 00:16:08.663 04:31:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:08.663 04:31:11 -- host/digest.sh@80 -- # rw=randwrite 00:16:08.663 04:31:11 -- host/digest.sh@80 -- # bs=4096 00:16:08.663 04:31:11 -- host/digest.sh@80 -- # qd=128 00:16:08.663 04:31:11 -- host/digest.sh@82 -- # bperfpid=71727 00:16:08.663 04:31:11 -- host/digest.sh@83 -- # waitforlisten 71727 /var/tmp/bperf.sock 00:16:08.663 04:31:11 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:08.664 04:31:11 -- 
common/autotest_common.sh@829 -- # '[' -z 71727 ']' 00:16:08.664 04:31:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:08.664 04:31:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:08.664 04:31:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:08.664 04:31:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.664 04:31:11 -- common/autotest_common.sh@10 -- # set +x 00:16:08.664 [2024-12-07 04:31:11.811710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:08.664 [2024-12-07 04:31:11.812376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71727 ] 00:16:08.922 [2024-12-07 04:31:11.944388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.922 [2024-12-07 04:31:11.999434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.488 04:31:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.488 04:31:12 -- common/autotest_common.sh@862 -- # return 0 00:16:09.488 04:31:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:09.488 04:31:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:09.488 04:31:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:10.057 04:31:12 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:10.057 04:31:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:10.057 nvme0n1 00:16:10.057 04:31:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:10.057 04:31:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:10.316 Running I/O for 2 seconds... 
00:16:12.223 00:16:12.223 Latency(us) 00:16:12.223 [2024-12-07T04:31:15.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.223 [2024-12-07T04:31:15.463Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:12.223 nvme0n1 : 2.01 17538.72 68.51 0.00 0.00 7292.22 6345.08 15490.33 00:16:12.223 [2024-12-07T04:31:15.463Z] =================================================================================================================== 00:16:12.223 [2024-12-07T04:31:15.463Z] Total : 17538.72 68.51 0.00 0.00 7292.22 6345.08 15490.33 00:16:12.223 0 00:16:12.223 04:31:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:12.223 04:31:15 -- host/digest.sh@92 -- # get_accel_stats 00:16:12.223 04:31:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:12.223 | select(.opcode=="crc32c") 00:16:12.223 | "\(.module_name) \(.executed)"' 00:16:12.223 04:31:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:12.223 04:31:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:12.481 04:31:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:12.481 04:31:15 -- host/digest.sh@93 -- # exp_module=software 00:16:12.481 04:31:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:12.481 04:31:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:12.481 04:31:15 -- host/digest.sh@97 -- # killprocess 71727 00:16:12.481 04:31:15 -- common/autotest_common.sh@936 -- # '[' -z 71727 ']' 00:16:12.481 04:31:15 -- common/autotest_common.sh@940 -- # kill -0 71727 00:16:12.481 04:31:15 -- common/autotest_common.sh@941 -- # uname 00:16:12.481 04:31:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.481 04:31:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71727 00:16:12.739 04:31:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:12.739 04:31:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:12.739 killing process with pid 71727 00:16:12.739 04:31:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71727' 00:16:12.739 Received shutdown signal, test time was about 2.000000 seconds 00:16:12.739 00:16:12.739 Latency(us) 00:16:12.739 [2024-12-07T04:31:15.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.739 [2024-12-07T04:31:15.979Z] =================================================================================================================== 00:16:12.739 [2024-12-07T04:31:15.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.739 04:31:15 -- common/autotest_common.sh@955 -- # kill 71727 00:16:12.739 04:31:15 -- common/autotest_common.sh@960 -- # wait 71727 00:16:12.739 04:31:15 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:12.739 04:31:15 -- host/digest.sh@77 -- # local rw bs qd 00:16:12.739 04:31:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:12.739 04:31:15 -- host/digest.sh@80 -- # rw=randwrite 00:16:12.739 04:31:15 -- host/digest.sh@80 -- # bs=131072 00:16:12.739 04:31:15 -- host/digest.sh@80 -- # qd=16 00:16:12.739 04:31:15 -- host/digest.sh@82 -- # bperfpid=71783 00:16:12.739 04:31:15 -- host/digest.sh@83 -- # waitforlisten 71783 /var/tmp/bperf.sock 00:16:12.739 04:31:15 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:12.739 04:31:15 -- 
common/autotest_common.sh@829 -- # '[' -z 71783 ']' 00:16:12.739 04:31:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:12.739 04:31:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:12.739 04:31:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:12.739 04:31:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.739 04:31:15 -- common/autotest_common.sh@10 -- # set +x 00:16:12.997 [2024-12-07 04:31:15.991937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:12.997 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:12.997 Zero copy mechanism will not be used. 00:16:12.997 [2024-12-07 04:31:15.992846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71783 ] 00:16:12.997 [2024-12-07 04:31:16.129955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.997 [2024-12-07 04:31:16.184971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.997 04:31:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.997 04:31:16 -- common/autotest_common.sh@862 -- # return 0 00:16:12.997 04:31:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:12.997 04:31:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:12.997 04:31:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:13.256 04:31:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.256 04:31:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.823 nvme0n1 00:16:13.824 04:31:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:13.824 04:31:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:13.824 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:13.824 Zero copy mechanism will not be used. 00:16:13.824 Running I/O for 2 seconds... 
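All four clean-digest runs reuse the same run_bperf helper and differ only in the workload, I/O size, and queue depth handed to bdevperf; the 131072-byte runs additionally report that zero copy is skipped because the size exceeds the 65536-byte threshold. A compact way to read the matrix being exercised (the script itself simply calls run_bperf four times in sequence):

# (rw, block size in bytes, queue depth) combinations, in the order they appear in this log;
# each one becomes a fresh bdevperf instance run with "-w $rw -o $bs -q $qd -t 2".
while read -r rw bs qd; do
    run_bperf "$rw" "$bs" "$qd"
done <<'EOF'
randread  4096   128
randread  131072 16
randwrite 4096   128
randwrite 131072 16
EOF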
00:16:15.728 00:16:15.728 Latency(us) 00:16:15.728 [2024-12-07T04:31:18.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.728 [2024-12-07T04:31:18.968Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:15.728 nvme0n1 : 2.00 6762.22 845.28 0.00 0.00 2360.96 1824.58 11498.59 00:16:15.728 [2024-12-07T04:31:18.969Z] =================================================================================================================== 00:16:15.729 [2024-12-07T04:31:18.969Z] Total : 6762.22 845.28 0.00 0.00 2360.96 1824.58 11498.59 00:16:15.729 0 00:16:15.729 04:31:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:15.729 04:31:18 -- host/digest.sh@92 -- # get_accel_stats 00:16:15.729 04:31:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:15.729 04:31:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:15.729 | select(.opcode=="crc32c") 00:16:15.729 | "\(.module_name) \(.executed)"' 00:16:15.729 04:31:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:15.988 04:31:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:15.988 04:31:19 -- host/digest.sh@93 -- # exp_module=software 00:16:15.988 04:31:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:15.988 04:31:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:15.988 04:31:19 -- host/digest.sh@97 -- # killprocess 71783 00:16:15.988 04:31:19 -- common/autotest_common.sh@936 -- # '[' -z 71783 ']' 00:16:15.988 04:31:19 -- common/autotest_common.sh@940 -- # kill -0 71783 00:16:15.988 04:31:19 -- common/autotest_common.sh@941 -- # uname 00:16:15.988 04:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:15.988 04:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71783 00:16:15.988 killing process with pid 71783 00:16:15.988 Received shutdown signal, test time was about 2.000000 seconds 00:16:15.988 00:16:15.988 Latency(us) 00:16:15.988 [2024-12-07T04:31:19.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.988 [2024-12-07T04:31:19.228Z] =================================================================================================================== 00:16:15.988 [2024-12-07T04:31:19.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:15.988 04:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:15.988 04:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:15.988 04:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71783' 00:16:15.988 04:31:19 -- common/autotest_common.sh@955 -- # kill 71783 00:16:15.988 04:31:19 -- common/autotest_common.sh@960 -- # wait 71783 00:16:16.247 04:31:19 -- host/digest.sh@126 -- # killprocess 71590 00:16:16.247 04:31:19 -- common/autotest_common.sh@936 -- # '[' -z 71590 ']' 00:16:16.247 04:31:19 -- common/autotest_common.sh@940 -- # kill -0 71590 00:16:16.247 04:31:19 -- common/autotest_common.sh@941 -- # uname 00:16:16.247 04:31:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.247 04:31:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71590 00:16:16.247 killing process with pid 71590 00:16:16.247 04:31:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.247 04:31:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.247 04:31:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71590' 
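The teardown pattern that repeats after every run (bperf pids 71622, 71679, 71727, 71783, and finally the target pid 71590 here) is the killprocess helper from autotest_common.sh. Reconstructed from the traced path only, not the verbatim implementation, it amounts to:

killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" || return 1                            # the process must still be alive
    local process_name=
    if [[ "$(uname)" == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 for SPDK apps
    fi
    if [[ "$process_name" != sudo ]]; then                # traced path: target is not wrapped in sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it so the exit status is observed
    fi
}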
00:16:16.247 04:31:19 -- common/autotest_common.sh@955 -- # kill 71590 00:16:16.247 04:31:19 -- common/autotest_common.sh@960 -- # wait 71590 00:16:16.507 00:16:16.507 real 0m16.191s 00:16:16.507 user 0m31.186s 00:16:16.507 sys 0m4.294s 00:16:16.507 04:31:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:16.507 04:31:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.507 ************************************ 00:16:16.507 END TEST nvmf_digest_clean 00:16:16.507 ************************************ 00:16:16.507 04:31:19 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:16.507 04:31:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:16.507 04:31:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.507 04:31:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.507 ************************************ 00:16:16.507 START TEST nvmf_digest_error 00:16:16.507 ************************************ 00:16:16.507 04:31:19 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:16.507 04:31:19 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:16.507 04:31:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.507 04:31:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.507 04:31:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.507 04:31:19 -- nvmf/common.sh@469 -- # nvmfpid=71863 00:16:16.507 04:31:19 -- nvmf/common.sh@470 -- # waitforlisten 71863 00:16:16.507 04:31:19 -- common/autotest_common.sh@829 -- # '[' -z 71863 ']' 00:16:16.507 04:31:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:16.507 04:31:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.507 04:31:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.507 04:31:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.507 04:31:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.507 04:31:19 -- common/autotest_common.sh@10 -- # set +x 00:16:16.507 [2024-12-07 04:31:19.684975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:16.507 [2024-12-07 04:31:19.685094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.766 [2024-12-07 04:31:19.825031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.766 [2024-12-07 04:31:19.880210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.766 [2024-12-07 04:31:19.880375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.766 [2024-12-07 04:31:19.880387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.766 [2024-12-07 04:31:19.880396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:16.766 [2024-12-07 04:31:19.880420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.704 04:31:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.704 04:31:20 -- common/autotest_common.sh@862 -- # return 0 00:16:17.704 04:31:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.704 04:31:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.704 04:31:20 -- common/autotest_common.sh@10 -- # set +x 00:16:17.704 04:31:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.704 04:31:20 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:17.704 04:31:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.704 04:31:20 -- common/autotest_common.sh@10 -- # set +x 00:16:17.704 [2024-12-07 04:31:20.656955] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:17.704 04:31:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.704 04:31:20 -- host/digest.sh@104 -- # common_target_config 00:16:17.704 04:31:20 -- host/digest.sh@43 -- # rpc_cmd 00:16:17.704 04:31:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.704 04:31:20 -- common/autotest_common.sh@10 -- # set +x 00:16:17.704 null0 00:16:17.704 [2024-12-07 04:31:20.726827] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.704 [2024-12-07 04:31:20.750958] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.704 04:31:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.704 04:31:20 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:17.704 04:31:20 -- host/digest.sh@54 -- # local rw bs qd 00:16:17.704 04:31:20 -- host/digest.sh@56 -- # rw=randread 00:16:17.704 04:31:20 -- host/digest.sh@56 -- # bs=4096 00:16:17.704 04:31:20 -- host/digest.sh@56 -- # qd=128 00:16:17.704 04:31:20 -- host/digest.sh@58 -- # bperfpid=71895 00:16:17.704 04:31:20 -- host/digest.sh@60 -- # waitforlisten 71895 /var/tmp/bperf.sock 00:16:17.704 04:31:20 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:17.704 04:31:20 -- common/autotest_common.sh@829 -- # '[' -z 71895 ']' 00:16:17.704 04:31:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:17.704 04:31:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:17.704 04:31:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:17.704 04:31:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.704 04:31:20 -- common/autotest_common.sh@10 -- # set +x 00:16:17.704 [2024-12-07 04:31:20.809285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:17.704 [2024-12-07 04:31:20.809406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71895 ] 00:16:17.963 [2024-12-07 04:31:20.950561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.963 [2024-12-07 04:31:21.019203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.530 04:31:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.530 04:31:21 -- common/autotest_common.sh@862 -- # return 0 00:16:18.530 04:31:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:18.530 04:31:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:18.789 04:31:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:18.789 04:31:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.789 04:31:22 -- common/autotest_common.sh@10 -- # set +x 00:16:18.789 04:31:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.789 04:31:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.789 04:31:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:19.046 nvme0n1 00:16:19.304 04:31:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:19.305 04:31:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.305 04:31:22 -- common/autotest_common.sh@10 -- # set +x 00:16:19.305 04:31:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.305 04:31:22 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:19.305 04:31:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:19.305 Running I/O for 2 seconds... 
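The error-injection choreography traced for this first nvmf_digest_error run splits across two RPC sockets: rpc_cmd talks to the nvmf target (started with --wait-for-rpc so the crc32c opcode can be rerouted to the error accel module before the framework comes up), while bperf_rpc talks to bdevperf. Condensed into plain rpc.py calls, grouped by socket rather than strict trace order, with the paths used in this workspace:

SPDK=/home/vagrant/spdk_repo/spdk

# target side (nvmf_tgt running inside nvmf_tgt_ns_spdk, default RPC socket)
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error            # route crc32c through the error module
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable  # injection starts disabled

# host side (bdevperf on /var/tmp/bperf.sock)
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# arm 256 corrupted crc32c results on the target, then drive the 2-second randread workload
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests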
00:16:19.305 [2024-12-07 04:31:22.420029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.420093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.420124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.435592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.435672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.435702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.450553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.450603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.450632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.465700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.465749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.465777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.480817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.480869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.480897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.495847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.495897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.495925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.511963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.512014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.512042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.526935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.526985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.527013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.305 [2024-12-07 04:31:22.542476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.305 [2024-12-07 04:31:22.542525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.305 [2024-12-07 04:31:22.542552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.558170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.558218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.558246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.573329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.573378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.573405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.589367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.589418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.589446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.605474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.605524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.605552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.621518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.621569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.621598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.639268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.639333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.639368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.655419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.563 [2024-12-07 04:31:22.655471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.563 [2024-12-07 04:31:22.655500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.563 [2024-12-07 04:31:22.670636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.670711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.670740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.685576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.685625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.685661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.700424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.700474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.700501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.715324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.715397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.715425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.730308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.730358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.730386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.745248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.745297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.745325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.760362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.760411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.760440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.775429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.775479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.564 [2024-12-07 04:31:22.790323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.564 [2024-12-07 04:31:22.790372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.564 [2024-12-07 04:31:22.790399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.806638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.806697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.806726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.822424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.822486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.822514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.837113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.837162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 
[2024-12-07 04:31:22.837190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.852307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.852356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.852384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.868417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.868465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.868493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.884332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.884380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.884408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.899240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.899288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.899315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.914234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.914284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.914312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.929411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.929460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.929487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.944806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.944869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11953 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.944897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.959920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.959968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.959996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.975085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.975133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.975161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:22.992060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:22.992109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:22.992137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:23.008307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:23.008356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:23.008383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:23.024386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:23.024435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:23.024474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:23.040439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:23.040488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:23.040516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.824 [2024-12-07 04:31:23.056263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:19.824 [2024-12-07 04:31:23.056312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:7848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.824 [2024-12-07 04:31:23.056340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.074043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.074093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.074122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.090483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.090535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.090564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.108351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.108387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.108417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.125905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.125943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.125973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.141952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.142003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.142031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.157486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.157537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.157565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.172695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.172746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.172774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.187870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.187919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.187947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.202761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.202811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.202838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.217602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.217677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.217691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.232688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.232737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.232765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.247652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.247729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.247757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.263968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.264018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.264046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.278946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 
00:16:20.084 [2024-12-07 04:31:23.278994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.279021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.295086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.295151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.295196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.084 [2024-12-07 04:31:23.311733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.084 [2024-12-07 04:31:23.311798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.084 [2024-12-07 04:31:23.311826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.342 [2024-12-07 04:31:23.329679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.342 [2024-12-07 04:31:23.329752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.342 [2024-12-07 04:31:23.329782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.342 [2024-12-07 04:31:23.345957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.342 [2024-12-07 04:31:23.346009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.342 [2024-12-07 04:31:23.346037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.342 [2024-12-07 04:31:23.362884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.342 [2024-12-07 04:31:23.362936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.342 [2024-12-07 04:31:23.362965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.342 [2024-12-07 04:31:23.378912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.378960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.378987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.394736] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.394785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.394813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.417362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.417413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.417441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.432805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.432854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.432881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.448249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.448299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.448326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.463443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.463494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.463523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.478460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.478508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.478536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.493469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.493517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.493545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.508564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.508613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.508640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.523635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.523713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.523758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.539522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.539574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.539587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.555490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.555541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.555570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.343 [2024-12-07 04:31:23.570840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.343 [2024-12-07 04:31:23.570890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.343 [2024-12-07 04:31:23.570918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.587368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.587420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.587433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.602372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.602421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.602448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.617562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.617612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.617640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.633136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.633219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.633249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.650392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.650442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.650470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.667101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.667151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.667179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.682039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.682088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.682116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.697060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.697109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.697136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.712553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.712602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.712630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.727607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.727685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.727731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.742584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.742666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.742681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.757162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.757210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.757237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.772282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.772332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.772360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.788407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.788455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.788483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.804153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.804202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.804244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.819104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.819154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 
[2024-12-07 04:31:23.819182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.602 [2024-12-07 04:31:23.834410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.602 [2024-12-07 04:31:23.834460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.602 [2024-12-07 04:31:23.834488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.850919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.850968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.850997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.866188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.866237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.866264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.881494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.881543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.881570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.896703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.896787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.912019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.912068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.912095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.927061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.927109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14099 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.927136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.942435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.942484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.942512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.957663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.957701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.957729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.973254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.973304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.973331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:23.989416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:23.989465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:23.989493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.005002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.005051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.005078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.019950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.020001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.020029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.034940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.034991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:14477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.035020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.049894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.049944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.049972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.065145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.065194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.065222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.080078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.080127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.080155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.861 [2024-12-07 04:31:24.094960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:20.861 [2024-12-07 04:31:24.095010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.861 [2024-12-07 04:31:24.095038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.112316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.112366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.112395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.129275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.129327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.129339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.146722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.146773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.146802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.162770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.162819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.162848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.178811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.178860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.178888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.195835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.195901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.195931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.211640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.211714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.211756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.227180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.227229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.227257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.242611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.242703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.242735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.257936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 
[2024-12-07 04:31:24.257985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.258013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.273153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.273201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.273229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.288382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.288430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.288458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.303924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.304016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.319285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.319334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.319369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.120 [2024-12-07 04:31:24.334413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.120 [2024-12-07 04:31:24.334461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.120 [2024-12-07 04:31:24.334489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.121 [2024-12-07 04:31:24.349601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.121 [2024-12-07 04:31:24.349673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.121 [2024-12-07 04:31:24.349686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.379 [2024-12-07 04:31:24.366234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb94d40) 00:16:21.379 [2024-12-07 04:31:24.366284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.379 [2024-12-07 04:31:24.366313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.379 [2024-12-07 04:31:24.381405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.379 [2024-12-07 04:31:24.381454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.379 [2024-12-07 04:31:24.381481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.379 [2024-12-07 04:31:24.396587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb94d40) 00:16:21.379 [2024-12-07 04:31:24.396636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.379 [2024-12-07 04:31:24.396688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.379 00:16:21.379 Latency(us) 00:16:21.379 [2024-12-07T04:31:24.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.379 [2024-12-07T04:31:24.619Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:21.379 nvme0n1 : 2.01 16199.49 63.28 0.00 0.00 7896.67 7060.01 30027.40 00:16:21.379 [2024-12-07T04:31:24.619Z] =================================================================================================================== 00:16:21.379 [2024-12-07T04:31:24.619Z] Total : 16199.49 63.28 0.00 0.00 7896.67 7060.01 30027.40 00:16:21.379 0 00:16:21.379 04:31:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:21.379 04:31:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:21.379 04:31:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:21.379 04:31:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:21.379 | .driver_specific 00:16:21.379 | .nvme_error 00:16:21.379 | .status_code 00:16:21.379 | .command_transient_transport_error' 00:16:21.637 04:31:24 -- host/digest.sh@71 -- # (( 127 > 0 )) 00:16:21.637 04:31:24 -- host/digest.sh@73 -- # killprocess 71895 00:16:21.637 04:31:24 -- common/autotest_common.sh@936 -- # '[' -z 71895 ']' 00:16:21.637 04:31:24 -- common/autotest_common.sh@940 -- # kill -0 71895 00:16:21.637 04:31:24 -- common/autotest_common.sh@941 -- # uname 00:16:21.637 04:31:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.637 04:31:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71895 00:16:21.637 04:31:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:21.637 killing process with pid 71895 00:16:21.637 04:31:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:21.637 04:31:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71895' 00:16:21.637 Received shutdown signal, test time was about 2.000000 seconds 00:16:21.637 00:16:21.637 Latency(us) 00:16:21.637 [2024-12-07T04:31:24.877Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:16:21.637 [2024-12-07T04:31:24.877Z] =================================================================================================================== 00:16:21.637 [2024-12-07T04:31:24.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.637 04:31:24 -- common/autotest_common.sh@955 -- # kill 71895 00:16:21.637 04:31:24 -- common/autotest_common.sh@960 -- # wait 71895 00:16:21.896 04:31:24 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:21.896 04:31:24 -- host/digest.sh@54 -- # local rw bs qd 00:16:21.896 04:31:24 -- host/digest.sh@56 -- # rw=randread 00:16:21.896 04:31:24 -- host/digest.sh@56 -- # bs=131072 00:16:21.896 04:31:24 -- host/digest.sh@56 -- # qd=16 00:16:21.896 04:31:24 -- host/digest.sh@58 -- # bperfpid=71951 00:16:21.896 04:31:24 -- host/digest.sh@60 -- # waitforlisten 71951 /var/tmp/bperf.sock 00:16:21.896 04:31:24 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:21.896 04:31:24 -- common/autotest_common.sh@829 -- # '[' -z 71951 ']' 00:16:21.896 04:31:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:21.896 04:31:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:21.896 04:31:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:21.896 04:31:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.896 04:31:24 -- common/autotest_common.sh@10 -- # set +x 00:16:21.896 [2024-12-07 04:31:24.967606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:21.896 [2024-12-07 04:31:24.967757] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71951 ] 00:16:21.896 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:21.896 Zero copy mechanism will not be used. 
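The pass/fail decision for the run above comes from the get_transient_errcount helper traced at host/digest.sh@27/28: it asks the bperf bdevperf instance for per-bdev I/O statistics and pulls the transient-transport-error counter out of the JSON with jq (the "(( 127 > 0 ))" check earlier is that counter's value for this run). A minimal sketch of that check, using only the rpc.py path, socket, and jq filter already shown in the trace:

# Sketch only: condensed from the host/digest.sh trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# bdev_get_iostat includes nvme_error counters because the controller was
# set up with bdev_nvme_set_options --nvme-error-stat (see the trace below).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The digest test passes when at least one transient transport error was
# counted, i.e. the injected data-digest failures were observed and retried.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"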
00:16:21.896 [2024-12-07 04:31:25.103851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.154 [2024-12-07 04:31:25.159263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.720 04:31:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.720 04:31:25 -- common/autotest_common.sh@862 -- # return 0 00:16:22.720 04:31:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:22.720 04:31:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:22.978 04:31:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:22.978 04:31:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.978 04:31:26 -- common/autotest_common.sh@10 -- # set +x 00:16:22.978 04:31:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.978 04:31:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.978 04:31:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.236 nvme0n1 00:16:23.236 04:31:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:23.236 04:31:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.236 04:31:26 -- common/autotest_common.sh@10 -- # set +x 00:16:23.236 04:31:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.236 04:31:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:23.236 04:31:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:23.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:23.495 Zero copy mechanism will not be used. 00:16:23.495 Running I/O for 2 seconds... 
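Before the 131072-byte randread pass whose output follows, the trace above shows the same preparation sequence as the earlier run: unlimited bdev retries plus per-status-code NVMe error accounting on the bperf side, the crc32c error injection cleared and later re-armed in corrupt mode via rpc_cmd (a different socket from the bperf one), a data-digest-enabled TCP controller attached, and perform_tests driven over the bperf socket. A condensed sketch of that sequence, in the order traced; the target socket path is an assumption, since rpc_cmd's socket is not shown in this excerpt:

# Sketch of the setup traced above; all arguments other than $tgt_sock are
# taken verbatim from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock
tgt_sock=/var/tmp/spdk.sock   # assumed default; not shown in this excerpt

# Retry transient failures indefinitely and keep per-status-code NVMe error
# counters, so digest errors surface in bdev_get_iostat instead of failing I/O.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean slate: clear any previous crc32c error injection.
"$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled (--ddgst).
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm the crc32c injection in corrupt mode (-i 32, as traced), so that
# data-digest verification fails and produces the errors logged below.
"$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the queued randread workload (128 KiB I/O, qd 16) in the running bdevperf.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests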
00:16:23.495 [2024-12-07 04:31:26.562228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.562296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.495 [2024-12-07 04:31:26.562327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.495 [2024-12-07 04:31:26.566407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.566461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.495 [2024-12-07 04:31:26.566490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.495 [2024-12-07 04:31:26.570534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.570585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.495 [2024-12-07 04:31:26.570614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.495 [2024-12-07 04:31:26.574737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.574798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.495 [2024-12-07 04:31:26.574828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.495 [2024-12-07 04:31:26.578703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.578753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.495 [2024-12-07 04:31:26.578781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.495 [2024-12-07 04:31:26.582632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.582696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.495 [2024-12-07 04:31:26.582726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.495 [2024-12-07 04:31:26.586606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.495 [2024-12-07 04:31:26.586683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.586697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.590865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.590917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.590946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.594794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.594843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.594873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.598738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.598788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.598816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.602809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.602860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.602889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.606758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.606808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.606836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.610800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.610852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.610880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.614721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.614770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.614798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.618870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.618921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.618950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.622980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.623044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.623071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.627072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.627121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.627149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.631113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.631163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.631192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.635066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.635115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.635143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.638958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.639007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.639036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.642869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.642918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:23.496 [2024-12-07 04:31:26.642946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.646715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.646763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.646790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.650650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.650710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.650738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.654848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.654899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.654942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.658866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.658916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.658945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.662920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.662968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.662995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.666735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.666783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.666811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.670730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.670779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.670806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.674667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.674729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.674757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.678680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.678741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.678769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.683137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.683194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.683207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.688076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.688118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.688148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.692613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.692677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.496 [2024-12-07 04:31:26.692708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.496 [2024-12-07 04:31:26.697266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.496 [2024-12-07 04:31:26.697318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.697348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.701689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.701750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.706263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.706317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.706346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.710668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.710728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.710757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.715028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.715066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.715080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.719531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.719572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.719586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.723797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.723856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.723885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.727995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.497 [2024-12-07 04:31:26.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.728060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.497 [2024-12-07 04:31:26.732601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 
00:16:23.497 [2024-12-07 04:31:26.732675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.497 [2024-12-07 04:31:26.732689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.737208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.737261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.737289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.741705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.741784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.741814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.746117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.746202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.746214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.750484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.750537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.750565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.754860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.754899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.754913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.758951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.759016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.759046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.763163] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.763216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.763260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.767447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.767501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.767515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.771800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.771863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.771892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.775936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.775989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.776018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.780023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.780074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.784048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.784100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.784129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.788056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.788108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.788137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.792097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.792150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.792178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.796106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.796159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.796187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.800357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.800409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.800436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.804491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.804543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.804572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.808502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.808554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.808582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.812736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.812786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.758 [2024-12-07 04:31:26.812830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.758 [2024-12-07 04:31:26.816713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.758 [2024-12-07 04:31:26.816764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.816793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.820691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.820742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.820770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.824894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.824947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.824976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.829006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.829058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.829086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.833123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.833192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.833220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.837385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.837439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.837468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.841544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.841595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.841623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.845572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.845623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.845651] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.849652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.849712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.849741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.853828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.853878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.853906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.857966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.858018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.858045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.862309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.862361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.866513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.866565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.866593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.870624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.870721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.870735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.874691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.874743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.874772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.878935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.878988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.879016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.883001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.883052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.883064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.887185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.887252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.887279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.891769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.891805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.891845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.896055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.896092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.896120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.900294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.900344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.900372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.904397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.904447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.904474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.909089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.909143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.909188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.913358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.913408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.913436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.917662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.917721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.917750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.759 [2024-12-07 04:31:26.921834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.759 [2024-12-07 04:31:26.921883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.759 [2024-12-07 04:31:26.921911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.925857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.925908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.925937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.929755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.929804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.929832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.933780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.933829] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.933857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.937906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.937956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.937984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.942571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.942625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.942683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.946817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.946895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.950746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.950795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.950823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.954559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.954608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.954636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.958683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.958731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.958759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.962635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.962710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.962739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.966686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.966747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.966775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.970617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.970694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.970724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.974493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.974542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.974570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.978510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.978560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.978588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.982457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.982507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.982535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.986359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.986408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.986436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.760 [2024-12-07 04:31:26.990451] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:23.760 [2024-12-07 04:31:26.990502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.760 [2024-12-07 04:31:26.990531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:26.995299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:26.995341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:26.995354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:26.999813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:26.999866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:26.999895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.004288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.004342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.004371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.008748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.008799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.008828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.012939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.012990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.013019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.017041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.017093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.017121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.021061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.021111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.021138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.024994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.025044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.025073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.029070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.029121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.029149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.033046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.033098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.033126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.037168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.037219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.037246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.041229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.041279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.041306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.045332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.045382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.045411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.049349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.049399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.049427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.053413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.053464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.053492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.057559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.032 [2024-12-07 04:31:27.057609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.032 [2024-12-07 04:31:27.057637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.032 [2024-12-07 04:31:27.061615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.061677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.061706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.065712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.065762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.065790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.069772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.069837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.069865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.073701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.073751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.073778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.077665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.077714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.077741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.081599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.081672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.081686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.085618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.085678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.085706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.089627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.089685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.089712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.093601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.093673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.093687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.097692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.097741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.097768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.101681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.101730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.033 [2024-12-07 04:31:27.101757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.105648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.105696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.105723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.109662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.109710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.109738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.113715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.113764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.113791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.117702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.117752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.117780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.121681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.121731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.121759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.125590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.125666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.125680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.129607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.129682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.129695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.133732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.133782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.133810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.137742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.137792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.137819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.141826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.141876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.141904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.145723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.145773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.145801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.149915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.149967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.149996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.153785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.153852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.153879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.157771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.157820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.157864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.162140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.162211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.162224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.166516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.033 [2024-12-07 04:31:27.166567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.033 [2024-12-07 04:31:27.166595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.033 [2024-12-07 04:31:27.170799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.170867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.170880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.174999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.175035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.175048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.179467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.179506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.179519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.183730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.183781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.183821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.188198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 
00:16:24.034 [2024-12-07 04:31:27.188250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.188278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.192565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.192632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.192661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.197176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.197231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.197244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.201575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.201613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.201627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.205907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.205947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.205960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.210320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.210373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.210387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.214684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.214733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.214748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.218883] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.218936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.218967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.223131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.223182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.223212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.227248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.227298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.227322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.231307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.231379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.231410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.235317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.235376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.235407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.239706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.239745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.239759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.244130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.244214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.244243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.248598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.248680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.248695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.253523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.253575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.253604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.257692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.257741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.257769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.261730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.261779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.261807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.034 [2024-12-07 04:31:27.265980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.034 [2024-12-07 04:31:27.266030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.034 [2024-12-07 04:31:27.266058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.270344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.270396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.270424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.274452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.274521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.274549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.278692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.278741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.278769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.282667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.282716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.282744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.286561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.286611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.286638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.290542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.290593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.290620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.294515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.294565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.294593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.298460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.298510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.298537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.302403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.302452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.302480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.306473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.306524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.306551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.310484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.310534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.310561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.314519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.314570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.314598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.318478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.318528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.318556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.322634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.322694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.322721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.326619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.326679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.326707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.330687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.330735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.294 [2024-12-07 04:31:27.330763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.334614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.334675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.334703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.338610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.338683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.338696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.342580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.342630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.342668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.346544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.346593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.346621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.350529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.350579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.350607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.354444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.354493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.354521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.294 [2024-12-07 04:31:27.358466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.294 [2024-12-07 04:31:27.358518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.294 [2024-12-07 04:31:27.358546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.362446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.362496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.362524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.366411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.366461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.366489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.370543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.370636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.374912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.374965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.374978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.379169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.379250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.379262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.383578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.383619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.383632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.388214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.388264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.388293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.392764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.392813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.392842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.397310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.397360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.397389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.401817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.401872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.401886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.406482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.406534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.406563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.410912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.410952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.410966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.415427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.415465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.415479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.419991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 
00:16:24.295 [2024-12-07 04:31:27.420031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.420046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.424601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.424677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.424692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.429162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.429244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.429256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.433593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.433666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.433680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.437993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.438031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.438045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.442511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.442561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.442589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.446892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.446929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.446942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.451010] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.451062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.451075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.455228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.455278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.455306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.459365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.459420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.459433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.463424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.463463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.295 [2024-12-07 04:31:27.463476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.295 [2024-12-07 04:31:27.468007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.295 [2024-12-07 04:31:27.468046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.468060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.472582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.472651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.472666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.476732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.476783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.476811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.480725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.480775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.480803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.484727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.484777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.484804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.488607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.488682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.488696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.492571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.492621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.492649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.496665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.496744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.496773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.501219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.501286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.501315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.505354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.505405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.505433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.509336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.509386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.509413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.513399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.513449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.513477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.517433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.517484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.517512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.521521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.521572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.521599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.525503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.525553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.525581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.296 [2024-12-07 04:31:27.529914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.296 [2024-12-07 04:31:27.529966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.296 [2024-12-07 04:31:27.529994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.534126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.534193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.534222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.538502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.538552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.538581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.542423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.542473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.542501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.546513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.546564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.546591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.550568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.550618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.550647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.554532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.554583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.554611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.558496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.558546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.558574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.562481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.562531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.562559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.566638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.566714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.566726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.570654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.570713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.570741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.574532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.574583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.574610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.578466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.578515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.578543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.582557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.582608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.582636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.586597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.586671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.586684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.590528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.590579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.590607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.594567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.594618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.594647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.598495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.598546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.598574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.602475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.602526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.602554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.606437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.606488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.606515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.610424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.610475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.610502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.614435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.614485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.614513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.618326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.618377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.618420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.622556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.622606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.622635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.626931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.627004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.627025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.631630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.631679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.631693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.636276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.636327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.636355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.640554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.640605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.640633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.644891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.644943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.644972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.649499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 
00:16:24.590 [2024-12-07 04:31:27.649550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.649578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.653937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.653989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.590 [2024-12-07 04:31:27.654018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.590 [2024-12-07 04:31:27.658257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.590 [2024-12-07 04:31:27.658308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.658336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.662390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.662440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.662468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.666399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.666449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.666477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.670568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.670620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.670649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.674782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.674832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.674859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.678730] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.678780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.678808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.682590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.682666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.686655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.686704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.686732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.690914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.690964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.690992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.694917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.694967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.694995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.698841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.698891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.698919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.702966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.703017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.703046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.707007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.707058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.707086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.711205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.711255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.711283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.715510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.715563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.715577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.720060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.720101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.720116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.724650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.724708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.724737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.729168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.729222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.729235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.733627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.733687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.733716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.738077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.738116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.738130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.742603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.742691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.747080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.747133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.747146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.751344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.751419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.751433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.755638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.755733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.755746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.760045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.760096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.760125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.764408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.764458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.764487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.768543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.768593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.768621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.772638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.772699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.772728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.776852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.776902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.776930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.780834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.780884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.780912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.784778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.784827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.784855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.788750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.788800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.788828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.792925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.792976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.591 [2024-12-07 04:31:27.793004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.797053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.797103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.797130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.801095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.801144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.801172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.805051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.805099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.805126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.809048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.809098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.809125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.813082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.813132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.813160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.817046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.817103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.817133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.821033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.821082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.821109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.591 [2024-12-07 04:31:27.825257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.591 [2024-12-07 04:31:27.825305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.591 [2024-12-07 04:31:27.825333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.829506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.829555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.829583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.833981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.834031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.834074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.838061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.838110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.838138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.842111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.842161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.842189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.846090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.846140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.846167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.850091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.850141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.850168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.854018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.854082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.854109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.858092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.858142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.858170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.861975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.862025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.862053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.865941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.865992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.850 [2024-12-07 04:31:27.866020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.850 [2024-12-07 04:31:27.869944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.850 [2024-12-07 04:31:27.869995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.870023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.874075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.874125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.874153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.878148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 
00:16:24.851 [2024-12-07 04:31:27.878200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.878242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.882265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.882314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.882341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.886310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.886359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.886387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.890342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.890391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.890419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.894538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.894588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.894616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.898858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.898894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.898923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.903121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.903158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.903187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.907762] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.907828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.907842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.912356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.912407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.912419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.916799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.916869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.916883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.921265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.921316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.921328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.925683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.925743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.925756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.929906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.929958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.929971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.934237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.934289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.934301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.938481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.938531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.938543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.942613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.942689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.942702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.946859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.946911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.946924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.950885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.950936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.950948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.954845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.954896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.954909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.958860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.958910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.958922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.962889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.962941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.962953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.966851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.966902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.966914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.970862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.970913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.970925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.975082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.975133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.975145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.851 [2024-12-07 04:31:27.979004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.851 [2024-12-07 04:31:27.979071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.851 [2024-12-07 04:31:27.979083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:27.983076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:27.983127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:27.983139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:27.987266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:27.987316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:27.987328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:27.991278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:27.991328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:27.991340] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:27.995309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:27.995365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:27.995395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:27.999315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:27.999390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:27.999420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.003462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.003515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.003529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.007545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.007598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.007612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.011567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.011605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.011619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.016159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.016224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.016252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.020537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.020589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.020601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.024658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.024719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.024732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.029016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.029066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.029078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.033160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.033211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.033223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.037258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.037308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.037320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.041378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.041429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.041441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.045530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.045581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.045593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.049696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.049746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.049757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.054030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.054065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.054078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.058080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.058115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.058127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.062115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.062150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.062163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.066122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.066171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.066183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.069981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.070032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.070044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.074061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.074111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.074123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.077973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.078023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.078035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.081895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.081945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.081957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.852 [2024-12-07 04:31:28.086301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:24.852 [2024-12-07 04:31:28.086384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.852 [2024-12-07 04:31:28.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.111 [2024-12-07 04:31:28.090612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.090670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.090683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.095056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.095106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.095117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.099020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.099070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.099082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.102882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.102931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.102943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.106731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 
00:16:25.112 [2024-12-07 04:31:28.106780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.106792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.110585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.110634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.110657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.114572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.114622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.114634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.118584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.118634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.118657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.122474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.122524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.122535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.126423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.126473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.126485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.130363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.130413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.130425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.134378] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.134429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.134441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.138472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.138523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.138535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.142369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.142435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.142447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.146381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.146431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.146443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.150405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.150455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.150467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.154443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.154492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.154521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.158551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.158602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.158614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.162614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.162674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.162687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.166570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.166620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.166632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.170616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.170688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.170717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.175329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.175405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.175419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.179566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.179605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.179618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.183811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.183877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.183890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.188114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.188221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.188251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.192393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.192444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.112 [2024-12-07 04:31:28.192472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.112 [2024-12-07 04:31:28.196594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.112 [2024-12-07 04:31:28.196669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.196700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.200906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.200957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.200987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.205284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.205335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.205363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.209850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.209902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.209932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.214363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.214416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.214445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.218894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.218947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.218962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.223290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.223340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.223375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.227446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.227485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.227498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.231722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.231789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.231830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.235855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.235905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.239972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.240038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.240066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.244052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.244101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.244130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.248166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.248215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.248242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.252118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.252168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.252196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.256069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.256118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.256146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.260059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.260108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.260135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.264066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.264115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.264143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.268060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.268107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.268135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.272603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.272666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.272680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.277048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.277098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.277127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.281171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.281235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.281263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.285225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.285274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.285302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.289250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.289299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.289327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.293407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.293457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.293485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.297499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.297549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.297577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.301625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.113 [2024-12-07 04:31:28.301683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.113 [2024-12-07 04:31:28.301712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.113 [2024-12-07 04:31:28.305684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.305742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.305771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.309715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.309763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.309790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.313742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.313790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.313818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.317834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.317884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.317911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.321925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.321976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.322004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.325967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.326044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.329970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.330019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.330047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.333955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.334004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.334033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.337988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.338037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.338065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.341947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.341998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.342025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.114 [2024-12-07 04:31:28.346128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.114 [2024-12-07 04:31:28.346179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.114 [2024-12-07 04:31:28.346207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.350671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.350733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.350762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.354898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.354949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.354978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.359019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.359083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.359110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.362985] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.363034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.363046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.366966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.367016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.367028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.370719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.370767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.370795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.374595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.374668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.374682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.378605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.378678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.378708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.382599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.382672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.382686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.386679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.386740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.386769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.391152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.391202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.391215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.395580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.395619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.400020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.400060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.400074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.404574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.404666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.404680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.409161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.409242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.409269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.413872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.413909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.413938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.418408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.418459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.418487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.373 [2024-12-07 04:31:28.422928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.373 [2024-12-07 04:31:28.422966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.373 [2024-12-07 04:31:28.422995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.427309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.427381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.427412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.431823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.431859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.431888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.436216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.436267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.436294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.440622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.440693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.440721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.445259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.445309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.445336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.449647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.449690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.449719] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.454305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.454354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.454382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.458820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.458874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.458888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.463316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.463372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.463402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.467815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.467880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.472315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.472349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.472376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.476631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.476674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.476702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.480911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.480946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.480974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.485063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.485098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.485125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.489141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.489190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.489217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.493183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.493247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.493274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.497132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.497165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.497192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.501114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.501147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.501174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.505036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.505069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.505097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.508935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.508968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.508995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.512907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.512940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.512968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.516871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.516903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.516930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.520875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.520909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.520937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.524798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.524847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.524874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.529188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.529255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.529283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.533783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.533816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.533843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.537788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.537821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.537847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.541724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.541757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.541784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.545739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.545772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.545799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.549724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.549757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.549784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.553663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.553705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.553733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.374 [2024-12-07 04:31:28.557579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14bb940) 00:16:25.374 [2024-12-07 04:31:28.557612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.374 [2024-12-07 04:31:28.557640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.374 00:16:25.374 Latency(us) 00:16:25.374 [2024-12-07T04:31:28.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.374 [2024-12-07T04:31:28.614Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:25.374 nvme0n1 : 2.00 7433.88 929.24 0.00 0.00 2149.10 1653.29 4855.62 00:16:25.374 [2024-12-07T04:31:28.614Z] =================================================================================================================== 00:16:25.374 [2024-12-07T04:31:28.614Z] Total : 7433.88 929.24 0.00 0.00 2149.10 1653.29 4855.62 00:16:25.374 0 00:16:25.374 04:31:28 -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:16:25.374 04:31:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:25.374 04:31:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:25.374 | .driver_specific 00:16:25.374 | .nvme_error 00:16:25.374 | .status_code 00:16:25.374 | .command_transient_transport_error' 00:16:25.374 04:31:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:25.939 04:31:28 -- host/digest.sh@71 -- # (( 480 > 0 )) 00:16:25.939 04:31:28 -- host/digest.sh@73 -- # killprocess 71951 00:16:25.939 04:31:28 -- common/autotest_common.sh@936 -- # '[' -z 71951 ']' 00:16:25.939 04:31:28 -- common/autotest_common.sh@940 -- # kill -0 71951 00:16:25.939 04:31:28 -- common/autotest_common.sh@941 -- # uname 00:16:25.939 04:31:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.939 04:31:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71951 00:16:25.939 04:31:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:25.939 04:31:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:25.939 killing process with pid 71951 00:16:25.939 04:31:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71951' 00:16:25.939 04:31:28 -- common/autotest_common.sh@955 -- # kill 71951 00:16:25.939 Received shutdown signal, test time was about 2.000000 seconds 00:16:25.939 00:16:25.939 Latency(us) 00:16:25.939 [2024-12-07T04:31:29.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.939 [2024-12-07T04:31:29.179Z] =================================================================================================================== 00:16:25.939 [2024-12-07T04:31:29.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.939 04:31:28 -- common/autotest_common.sh@960 -- # wait 71951 00:16:25.939 04:31:29 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:16:25.939 04:31:29 -- host/digest.sh@54 -- # local rw bs qd 00:16:25.939 04:31:29 -- host/digest.sh@56 -- # rw=randwrite 00:16:25.939 04:31:29 -- host/digest.sh@56 -- # bs=4096 00:16:25.939 04:31:29 -- host/digest.sh@56 -- # qd=128 00:16:25.939 04:31:29 -- host/digest.sh@58 -- # bperfpid=72011 00:16:25.939 04:31:29 -- host/digest.sh@60 -- # waitforlisten 72011 /var/tmp/bperf.sock 00:16:25.939 04:31:29 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:25.939 04:31:29 -- common/autotest_common.sh@829 -- # '[' -z 72011 ']' 00:16:25.939 04:31:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:25.939 04:31:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.939 04:31:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.939 04:31:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.939 04:31:29 -- common/autotest_common.sh@10 -- # set +x 00:16:25.939 [2024-12-07 04:31:29.152326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:25.939 [2024-12-07 04:31:29.152435] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72011 ] 00:16:26.198 [2024-12-07 04:31:29.282432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.198 [2024-12-07 04:31:29.334782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.131 04:31:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.131 04:31:30 -- common/autotest_common.sh@862 -- # return 0 00:16:27.131 04:31:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:27.131 04:31:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:27.131 04:31:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:27.131 04:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.131 04:31:30 -- common/autotest_common.sh@10 -- # set +x 00:16:27.389 04:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.389 04:31:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.389 04:31:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.647 nvme0n1 00:16:27.647 04:31:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:27.647 04:31:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.647 04:31:30 -- common/autotest_common.sh@10 -- # set +x 00:16:27.647 04:31:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.647 04:31:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:27.647 04:31:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:27.647 Running I/O for 2 seconds... 
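(For reference: the randwrite digest pass whose setup appears just above can be reproduced by hand against the bperf instance the script launched. The sketch below is only an illustration assembled from the commands visible in this trace — it is not the test script itself — and it assumes the SPDK repo checkout as the working directory, a bdevperf already listening on /var/tmp/bperf.sock, and the address/NQN values shown in the trace.)

#!/usr/bin/env bash
# drive the freshly started bdevperf over its RPC socket
RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep per-controller NVMe error counters, retry indefinitely
$RPC accel_error_inject_error -o crc32c -t disable                    # clear any crc32c injection left over from the previous pass
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # --ddgst enables the TCP data digest being corrupted here
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256             # injection arguments copied verbatim from the trace
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# read back how many commands completed with a transient transport error (the digest failures above)
$RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'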
00:16:27.647 [2024-12-07 04:31:30.766378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ddc00 00:16:27.647 [2024-12-07 04:31:30.767951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.647 [2024-12-07 04:31:30.768026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.647 [2024-12-07 04:31:30.782499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fef90 00:16:27.647 [2024-12-07 04:31:30.784016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.647 [2024-12-07 04:31:30.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.647 [2024-12-07 04:31:30.798458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ff3c8 00:16:27.647 [2024-12-07 04:31:30.800016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.647 [2024-12-07 04:31:30.800067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:27.647 [2024-12-07 04:31:30.812768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190feb58 00:16:27.647 [2024-12-07 04:31:30.814092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.647 [2024-12-07 04:31:30.814140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:27.647 [2024-12-07 04:31:30.826765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fe720 00:16:27.647 [2024-12-07 04:31:30.828099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.647 [2024-12-07 04:31:30.828147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:27.647 [2024-12-07 04:31:30.840679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fe2e8 00:16:27.647 [2024-12-07 04:31:30.841970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.647 [2024-12-07 04:31:30.842018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:27.648 [2024-12-07 04:31:30.854504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fdeb0 00:16:27.648 [2024-12-07 04:31:30.855867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.648 [2024-12-07 04:31:30.855914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:16:27.648 [2024-12-07 04:31:30.868383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fda78 00:16:27.648 [2024-12-07 04:31:30.869623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.648 [2024-12-07 04:31:30.869712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:27.648 [2024-12-07 04:31:30.882385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fd640 00:16:27.648 [2024-12-07 04:31:30.883790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.648 [2024-12-07 04:31:30.883857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:27.907 [2024-12-07 04:31:30.897176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fd208 00:16:27.907 [2024-12-07 04:31:30.898417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.907 [2024-12-07 04:31:30.898466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:27.907 [2024-12-07 04:31:30.911151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fcdd0 00:16:27.907 [2024-12-07 04:31:30.912439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.912489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:30.925358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fc998 00:16:27.908 [2024-12-07 04:31:30.926610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.926686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:30.939498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fc560 00:16:27.908 [2024-12-07 04:31:30.940751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.940825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:30.953482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fc128 00:16:27.908 [2024-12-07 04:31:30.954695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.954771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:30.967436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fbcf0 00:16:27.908 [2024-12-07 04:31:30.968666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.968741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:30.981793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fb8b8 00:16:27.908 [2024-12-07 04:31:30.982959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.983006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:30.995958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fb480 00:16:27.908 [2024-12-07 04:31:30.997198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:30.997261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.010126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fb048 00:16:27.908 [2024-12-07 04:31:31.011303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.011351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.024961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fac10 00:16:27.908 [2024-12-07 04:31:31.026338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.026402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.039591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fa7d8 00:16:27.908 [2024-12-07 04:31:31.040925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.040974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.054855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190fa3a0 00:16:27.908 [2024-12-07 04:31:31.056208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.056259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.071720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f9f68 00:16:27.908 [2024-12-07 04:31:31.073060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.073096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.088492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f9b30 00:16:27.908 [2024-12-07 04:31:31.089743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.089818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.106170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f96f8 00:16:27.908 [2024-12-07 04:31:31.107476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.107513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.122912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f92c0 00:16:27.908 [2024-12-07 04:31:31.124159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.124212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:27.908 [2024-12-07 04:31:31.139817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f8e88 00:16:27.908 [2024-12-07 04:31:31.141071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.908 [2024-12-07 04:31:31.141106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.156264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f8a50 00:16:28.168 [2024-12-07 04:31:31.157371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.157406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.171182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f8618 00:16:28.168 [2024-12-07 04:31:31.172312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.172346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.186103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f81e0 00:16:28.168 [2024-12-07 04:31:31.187225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.187259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.201063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f7da8 00:16:28.168 [2024-12-07 04:31:31.202107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.202138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.215816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f7970 00:16:28.168 [2024-12-07 04:31:31.216849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.216900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.231017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f7538 00:16:28.168 [2024-12-07 04:31:31.232232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.232265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.246392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f7100 00:16:28.168 [2024-12-07 04:31:31.247443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.247477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.260859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f6cc8 00:16:28.168 [2024-12-07 04:31:31.261875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.261907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.277092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f6890 00:16:28.168 [2024-12-07 04:31:31.278169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.168 [2024-12-07 04:31:31.278200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:28.168 [2024-12-07 04:31:31.292971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f6458 00:16:28.169 [2024-12-07 04:31:31.294034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.294080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.307333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f6020 00:16:28.169 [2024-12-07 04:31:31.308395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.308425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.321326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f5be8 00:16:28.169 [2024-12-07 04:31:31.322295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.322327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.335270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f57b0 00:16:28.169 [2024-12-07 04:31:31.336387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.336420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.349912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f5378 00:16:28.169 [2024-12-07 04:31:31.350903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.350933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.364823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f4f40 00:16:28.169 [2024-12-07 04:31:31.365853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.365890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.379370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f4b08 00:16:28.169 [2024-12-07 04:31:31.380387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 
04:31:31.380437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:28.169 [2024-12-07 04:31:31.395774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f46d0 00:16:28.169 [2024-12-07 04:31:31.396816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.169 [2024-12-07 04:31:31.396860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.412437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f4298 00:16:28.429 [2024-12-07 04:31:31.413467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.413514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.427151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f3e60 00:16:28.429 [2024-12-07 04:31:31.428182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.428231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.441774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f3a28 00:16:28.429 [2024-12-07 04:31:31.442756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.442813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.456248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f35f0 00:16:28.429 [2024-12-07 04:31:31.457272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.457324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.471716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f31b8 00:16:28.429 [2024-12-07 04:31:31.472694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.472783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.486534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f2d80 00:16:28.429 [2024-12-07 04:31:31.487516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:28.429 [2024-12-07 04:31:31.487553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.501132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f2948 00:16:28.429 [2024-12-07 04:31:31.502103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.502150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.515846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f2510 00:16:28.429 [2024-12-07 04:31:31.516790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.516846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.530287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f20d8 00:16:28.429 [2024-12-07 04:31:31.531234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.531281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.545753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f1ca0 00:16:28.429 [2024-12-07 04:31:31.546620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.546711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.559649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f1868 00:16:28.429 [2024-12-07 04:31:31.560582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.560628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.573437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f1430 00:16:28.429 [2024-12-07 04:31:31.574338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.574384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.587554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f0ff8 00:16:28.429 [2024-12-07 04:31:31.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22941 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.588512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.601833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f0bc0 00:16:28.429 [2024-12-07 04:31:31.602667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.602726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.615896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f0788 00:16:28.429 [2024-12-07 04:31:31.616706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.616777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.629782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190f0350 00:16:28.429 [2024-12-07 04:31:31.630582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.630630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.643725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eff18 00:16:28.429 [2024-12-07 04:31:31.644530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.644577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:28.429 [2024-12-07 04:31:31.657521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190efae0 00:16:28.429 [2024-12-07 04:31:31.658311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.429 [2024-12-07 04:31:31.658360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.672742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ef6a8 00:16:28.689 [2024-12-07 04:31:31.673520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.673568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.686681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ef270 00:16:28.689 [2024-12-07 04:31:31.687504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.687585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.700858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eee38 00:16:28.689 [2024-12-07 04:31:31.701633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.701689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.715083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eea00 00:16:28.689 [2024-12-07 04:31:31.715899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.715949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.729851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ee5c8 00:16:28.689 [2024-12-07 04:31:31.730740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.730803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.744468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ee190 00:16:28.689 [2024-12-07 04:31:31.745234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.745283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.758596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190edd58 00:16:28.689 [2024-12-07 04:31:31.759317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.759387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.772472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ed920 00:16:28.689 [2024-12-07 04:31:31.773183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.773227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:28.689 [2024-12-07 04:31:31.786681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ed4e8 00:16:28.689 [2024-12-07 04:31:31.787447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.689 [2024-12-07 04:31:31.787484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.802104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ed0b0 00:16:28.690 [2024-12-07 04:31:31.802889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.802924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.818231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ecc78 00:16:28.690 [2024-12-07 04:31:31.818905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.818941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.833303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ec840 00:16:28.690 [2024-12-07 04:31:31.834019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.834069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.847308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ec408 00:16:28.690 [2024-12-07 04:31:31.848043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.848091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.861346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ebfd0 00:16:28.690 [2024-12-07 04:31:31.862023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.862072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.875595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ebb98 00:16:28.690 [2024-12-07 04:31:31.876297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.876361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.890773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eb760 00:16:28.690 [2024-12-07 04:31:31.891448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.891483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.905428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eb328 00:16:28.690 [2024-12-07 04:31:31.906085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.906135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:28.690 [2024-12-07 04:31:31.919643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eaef0 00:16:28.690 [2024-12-07 04:31:31.920338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.690 [2024-12-07 04:31:31.920386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:31.935220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190eaab8 00:16:28.950 [2024-12-07 04:31:31.935833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:31.935869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:31.949760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ea680 00:16:28.950 [2024-12-07 04:31:31.950375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:31.950425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:31.964181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190ea248 00:16:28.950 [2024-12-07 04:31:31.964759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:31.964804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:31.978334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e9e10 00:16:28.950 [2024-12-07 04:31:31.978935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:31.978972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:31.993589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e99d8 00:16:28.950 [2024-12-07 
04:31:31.994178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:31.994213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.007922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e95a0 00:16:28.950 [2024-12-07 04:31:32.008474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.008509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.022076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e9168 00:16:28.950 [2024-12-07 04:31:32.022618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.022663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.036348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e8d30 00:16:28.950 [2024-12-07 04:31:32.036909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.036959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.050580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e88f8 00:16:28.950 [2024-12-07 04:31:32.051096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.051132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.064687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e84c0 00:16:28.950 [2024-12-07 04:31:32.065205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.065242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.078945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e8088 00:16:28.950 [2024-12-07 04:31:32.079456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.079492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.093302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e7c50 
00:16:28.950 [2024-12-07 04:31:32.093807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.093844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.107326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e7818 00:16:28.950 [2024-12-07 04:31:32.107865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.107900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.121511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e73e0 00:16:28.950 [2024-12-07 04:31:32.121999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.122037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.136752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e6fa8 00:16:28.950 [2024-12-07 04:31:32.137249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.137286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.151130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e6b70 00:16:28.950 [2024-12-07 04:31:32.151586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.151623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.165266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e6738 00:16:28.950 [2024-12-07 04:31:32.165692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.165736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.950 [2024-12-07 04:31:32.179143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e6300 00:16:28.950 [2024-12-07 04:31:32.179593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.950 [2024-12-07 04:31:32.179630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.194623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f60dc0) with pdu=0x2000190e5ec8 00:16:29.210 [2024-12-07 04:31:32.195043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.195077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.208710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e5a90 00:16:29.210 [2024-12-07 04:31:32.209108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.209144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.222915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e5658 00:16:29.210 [2024-12-07 04:31:32.223303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.223339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.236943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e5220 00:16:29.210 [2024-12-07 04:31:32.237320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.237355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.251141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e4de8 00:16:29.210 [2024-12-07 04:31:32.251543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.251580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.265611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e49b0 00:16:29.210 [2024-12-07 04:31:32.265985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.266020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.280097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e4578 00:16:29.210 [2024-12-07 04:31:32.280439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.280475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.296532] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e4140 00:16:29.210 [2024-12-07 04:31:32.296880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.296904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.312860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e3d08 00:16:29.210 [2024-12-07 04:31:32.313160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.313195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.329146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e38d0 00:16:29.210 [2024-12-07 04:31:32.329464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.329507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.344590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e3498 00:16:29.210 [2024-12-07 04:31:32.344916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.344941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.360308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e3060 00:16:29.210 [2024-12-07 04:31:32.360587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.360625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.375927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e2c28 00:16:29.210 [2024-12-07 04:31:32.376262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.376299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.392118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e27f0 00:16:29.210 [2024-12-07 04:31:32.392414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.392441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.407171] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e23b8 00:16:29.210 [2024-12-07 04:31:32.407437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.407475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.421861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e1f80 00:16:29.210 [2024-12-07 04:31:32.422092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.422114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:29.210 [2024-12-07 04:31:32.436832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e1b48 00:16:29.210 [2024-12-07 04:31:32.437053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.210 [2024-12-07 04:31:32.437090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.452533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e1710 00:16:29.470 [2024-12-07 04:31:32.452806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.452828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.467269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e12d8 00:16:29.470 [2024-12-07 04:31:32.467501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.467523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.481834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e0ea0 00:16:29.470 [2024-12-07 04:31:32.482012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.482048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.496246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e0a68 00:16:29.470 [2024-12-07 04:31:32.496425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.496445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 
04:31:32.510614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e0630 00:16:29.470 [2024-12-07 04:31:32.510792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.510812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.525349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190e01f8 00:16:29.470 [2024-12-07 04:31:32.525507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.525528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.539798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190dfdc0 00:16:29.470 [2024-12-07 04:31:32.539932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.539951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.555915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190df988 00:16:29.470 [2024-12-07 04:31:32.556054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.556078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.571663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190df550 00:16:29.470 [2024-12-07 04:31:32.571824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.571877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.587042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190df118 00:16:29.470 [2024-12-07 04:31:32.587155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.587175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.601842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190dece0 00:16:29.470 [2024-12-07 04:31:32.601942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.601961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
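Each injected failure appears in this output as a three-message group: the TCP transport reports the digest mismatch (tcp.c:data_crc32_calc_done), then the queue pair prints the affected WRITE command and its TRANSIENT TRANSPORT ERROR (00/22) completion. A minimal, hedged way to tally those groups from a saved copy of this console output (the log path is hypothetical; the match strings are taken verbatim from the messages above):

  LOG=console.log   # hypothetical file holding a capture of this output
  # Count occurrences rather than lines, since several messages share a line here.
  echo -n 'digest mismatches reported by the transport: '
  grep -oF 'Data digest error on tqpair=' "$LOG" | wc -l
  echo -n 'completions with TRANSIENT TRANSPORT ERROR: '
  grep -oF 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG" | wc -l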
00:16:29.470 [2024-12-07 04:31:32.617256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190de8a8 00:16:29.470 [2024-12-07 04:31:32.617350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.617370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.633872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190de038 00:16:29.470 [2024-12-07 04:31:32.633956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.633978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.655015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190de038 00:16:29.470 [2024-12-07 04:31:32.656378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.656412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.670051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190de470 00:16:29.470 [2024-12-07 04:31:32.671468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.671504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.684637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190de8a8 00:16:29.470 [2024-12-07 04:31:32.685998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.686046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:29.470 [2024-12-07 04:31:32.699123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190dece0 00:16:29.470 [2024-12-07 04:31:32.700476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.470 [2024-12-07 04:31:32.700525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:29.728 [2024-12-07 04:31:32.714957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190df118 00:16:29.728 [2024-12-07 04:31:32.716324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.728 [2024-12-07 04:31:32.716374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:16:29.728 [2024-12-07 04:31:32.729386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190df550 00:16:29.728 [2024-12-07 04:31:32.730716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.728 [2024-12-07 04:31:32.730793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:29.728 [2024-12-07 04:31:32.744191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60dc0) with pdu=0x2000190df988 00:16:29.728 [2024-12-07 04:31:32.745467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.729 [2024-12-07 04:31:32.745515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:29.729 00:16:29.729 Latency(us) 00:16:29.729 [2024-12-07T04:31:32.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.729 [2024-12-07T04:31:32.969Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.729 nvme0n1 : 2.00 17123.15 66.89 0.00 0.00 7469.02 6583.39 20852.36 00:16:29.729 [2024-12-07T04:31:32.969Z] =================================================================================================================== 00:16:29.729 [2024-12-07T04:31:32.969Z] Total : 17123.15 66.89 0.00 0.00 7469.02 6583.39 20852.36 00:16:29.729 0 00:16:29.729 04:31:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:29.729 04:31:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:29.729 04:31:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:29.729 04:31:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:29.729 | .driver_specific 00:16:29.729 | .nvme_error 00:16:29.729 | .status_code 00:16:29.729 | .command_transient_transport_error' 00:16:29.987 04:31:33 -- host/digest.sh@71 -- # (( 134 > 0 )) 00:16:29.987 04:31:33 -- host/digest.sh@73 -- # killprocess 72011 00:16:29.987 04:31:33 -- common/autotest_common.sh@936 -- # '[' -z 72011 ']' 00:16:29.987 04:31:33 -- common/autotest_common.sh@940 -- # kill -0 72011 00:16:29.987 04:31:33 -- common/autotest_common.sh@941 -- # uname 00:16:29.987 04:31:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.987 04:31:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72011 00:16:29.987 04:31:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.987 04:31:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.987 04:31:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72011' 00:16:29.987 killing process with pid 72011 00:16:29.987 04:31:33 -- common/autotest_common.sh@955 -- # kill 72011 00:16:29.987 Received shutdown signal, test time was about 2.000000 seconds 00:16:29.987 00:16:29.987 Latency(us) 00:16:29.987 [2024-12-07T04:31:33.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.987 [2024-12-07T04:31:33.227Z] =================================================================================================================== 00:16:29.987 [2024-12-07T04:31:33.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.987 04:31:33 -- 
common/autotest_common.sh@960 -- # wait 72011 00:16:30.245 04:31:33 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:30.245 04:31:33 -- host/digest.sh@54 -- # local rw bs qd 00:16:30.245 04:31:33 -- host/digest.sh@56 -- # rw=randwrite 00:16:30.245 04:31:33 -- host/digest.sh@56 -- # bs=131072 00:16:30.245 04:31:33 -- host/digest.sh@56 -- # qd=16 00:16:30.245 04:31:33 -- host/digest.sh@58 -- # bperfpid=72067 00:16:30.245 04:31:33 -- host/digest.sh@60 -- # waitforlisten 72067 /var/tmp/bperf.sock 00:16:30.245 04:31:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:30.245 04:31:33 -- common/autotest_common.sh@829 -- # '[' -z 72067 ']' 00:16:30.245 04:31:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:30.245 04:31:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.245 04:31:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:30.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:30.245 04:31:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.245 04:31:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.245 [2024-12-07 04:31:33.317742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:30.245 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:30.245 Zero copy mechanism will not be used. 00:16:30.245 [2024-12-07 04:31:33.317823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72067 ] 00:16:30.245 [2024-12-07 04:31:33.456834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.503 [2024-12-07 04:31:33.512130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.070 04:31:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.070 04:31:34 -- common/autotest_common.sh@862 -- # return 0 00:16:31.070 04:31:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:31.070 04:31:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:31.328 04:31:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:31.328 04:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.328 04:31:34 -- common/autotest_common.sh@10 -- # set +x 00:16:31.328 04:31:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.328 04:31:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.328 04:31:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.586 nvme0n1 00:16:31.845 04:31:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:31.845 04:31:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.845 04:31:34 -- common/autotest_common.sh@10 -- # set +x 00:16:31.845 04:31:34 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:16:31.845 04:31:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:31.845 04:31:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.845 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:31.845 Zero copy mechanism will not be used. 00:16:31.845 Running I/O for 2 seconds... 00:16:31.845 [2024-12-07 04:31:34.947247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.947645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.947699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.952547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.952913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.952956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.957713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.958062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.958108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.962724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.963047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.963084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.967698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.968069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.968108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.972622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.972995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.973034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.977709] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.978061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.978107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.982507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.982898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.982936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.987476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.987878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.987916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.992736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.993111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.993150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:34.997580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:34.997981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:34.998020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:35.002404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:35.002769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.845 [2024-12-07 04:31:35.002825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.845 [2024-12-07 04:31:35.007188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.845 [2024-12-07 04:31:35.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.007593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
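The 131072-byte, queue-depth-16 run that produces the errors above is assembled by the trace a little earlier in this log: bdevperf is started against /var/tmp/bperf.sock, NVMe error statistics and bdev retries are configured, the controller is attached with --ddgst so data digests are generated and checked, and crc32c corruption is injected before perform_tests is issued. A condensed sketch of that sequence follows; the paths and flags are copied from the trace, while the choice of which application answers the un-socketed rpc.py call, and the exact meaning of -i 32, are assumptions on my part:

  SPDK=/home/vagrant/spdk_repo/spdk   # repo path as it appears in the trace
  BPERF=/var/tmp/bperf.sock

  # Start bdevperf with the same workload parameters as this run
  # (the real harness also waits for the RPC socket before issuing calls).
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randwrite -o 131072 -t 2 -q 16 -z &

  # Enable NVMe error statistics and a bdev retry count of -1 (presumably
  # unbounded retries), then attach the remote controller with data digest on.
  $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Ask the accel_error module to corrupt crc32c results (-i 32 as in the trace);
  # this call uses the default RPC socket, i.e. not bdevperf (an assumption here).
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the configured workload; the digest errors in this log are the result.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests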
00:16:31.846 [2024-12-07 04:31:35.012118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.012465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.012508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.017007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.017342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.017380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.022116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.022482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.022520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.027123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.027506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.027546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.032090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.032440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.032481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.037076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.037455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.037494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.042426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.042817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.042858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.047505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.047919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.047958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.052658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.053070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.053110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.057694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.058042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.058080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.062421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.062800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.062847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.067144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.067528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.067566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.072043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.072383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.072418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.076894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.077243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.077283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.846 [2024-12-07 04:31:35.082128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:31.846 [2024-12-07 04:31:35.082482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.846 [2024-12-07 04:31:35.082520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.087203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.087558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.087597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.092051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.092395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.092441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.096861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.097217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.097257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.101650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.102022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.102059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.106365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.106724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.106760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.111250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.111615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.111665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.116144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.116496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.116535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.120903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.121256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.121300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.125603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.125969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.126015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.130383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.130746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.130804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.135468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.135863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.135902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.140777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.105 [2024-12-07 04:31:35.141167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.105 [2024-12-07 04:31:35.141205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.105 [2024-12-07 04:31:35.145676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.146026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 
[2024-12-07 04:31:35.146064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.150382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.150743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.150796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.155273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.155631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.155681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.160129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.160486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.160525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.165127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.165479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.165517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.169936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.170291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.170329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.174723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.175102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.175141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.179395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.179715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.179757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.184135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.184484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.184522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.188870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.189228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.189266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.193761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.194113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.194151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.198516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.198895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.198934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.203314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.203682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.203720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.208072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.208423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.208460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.212861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.213215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.213253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.217683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.218035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.218073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.222403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.222780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.222824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.227214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.227575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.227614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.232027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.232394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.232430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.236828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.237179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.237217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.241588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.241966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.242003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.246344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.246692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.246734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.251193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.251549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.251587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.255996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.256366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.256403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.260801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.261155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.261194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.265569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.265946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.265983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.270319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.270667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.270724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.275185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.275564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.275603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.279978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 
[2024-12-07 04:31:35.280310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.280341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.284777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.285127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.285165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.289534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.289895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.289932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.294323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.294673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.294745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.299900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.300289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.300327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.305105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.305510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.305549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.310318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.310671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.310715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.315463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.315793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.315828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.320419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.320789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.320834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.325392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.325776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.325816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.330278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.330631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.330679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.335000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.335350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.335412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.106 [2024-12-07 04:31:35.340174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.106 [2024-12-07 04:31:35.340534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.106 [2024-12-07 04:31:35.340566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.345526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.345888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.345921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.350706] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.351065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.351104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.356053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.356440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.356478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.361417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.361764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.361817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.366663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.367168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.367215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.372103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.372384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.372410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.377107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.377385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.377411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.382145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.382441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.382467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:16:32.370 [2024-12-07 04:31:35.387111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.387434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.387463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.391920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.392221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.392247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.397162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.370 [2024-12-07 04:31:35.397484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.370 [2024-12-07 04:31:35.397541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.370 [2024-12-07 04:31:35.402261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.402745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.402808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.407236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.407568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.407597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.412121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.412397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.412423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.417023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.417303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.417329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.421811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.422118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.422143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.426617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.427118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.427151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.431809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.432095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.432122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.436546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.436876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.436907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.441471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.441833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.441862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.446485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.447000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.447048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.451518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.451897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.451928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.456412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.456705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.456741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.461210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.461490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.461515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.466025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.466334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.466360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.470776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.471054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.471080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.475524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.475889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.475916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.480390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.480691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.480727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.485172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.485476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.489844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.490144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.490184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.494618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.495176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.495222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.499763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.500044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.500069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.504415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.504708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.504743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.509491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.509871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.509903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.514783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.515107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.515164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.520077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.520403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 
[2024-12-07 04:31:35.520428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.525328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.525623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.525673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.530617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.371 [2024-12-07 04:31:35.531115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.371 [2024-12-07 04:31:35.531148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.371 [2024-12-07 04:31:35.536240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.536520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.536545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.541503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.541871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.541902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.546807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.547128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.547200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.552246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.552526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.552551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.557346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.557623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.557672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.562702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.563047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.563075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.568039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.568358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.568384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.573325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.573604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.573630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.578646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.579128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.579191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.584036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.584370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.584396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.589215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.589492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.589519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.594333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.594825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.594873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.599566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.599913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.599941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.372 [2024-12-07 04:31:35.604734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.372 [2024-12-07 04:31:35.605139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.372 [2024-12-07 04:31:35.605218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.610077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.610355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.610381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.615122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.615432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.615461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.619913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.620196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.620221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.624754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.625057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.625084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.629450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.629774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.629801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.634235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.634706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.634756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.639224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.639546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.639574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.644177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.644454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.644480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.648925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.649228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.649254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.654129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.654584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.654616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.659457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 [2024-12-07 04:31:35.659821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.659853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.632 [2024-12-07 04:31:35.664281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.632 
[2024-12-07 04:31:35.664561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.632 [2024-12-07 04:31:35.664587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.669107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.669403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.669428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.674011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.674311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.674337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.678986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.679270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.679297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.683940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.684218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.684243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.688756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.689060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.689086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.693503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.694023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.694072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.698587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.698892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.698918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.703313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.703661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.703732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.708211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.708488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.708515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.713086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.713379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.713404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.717903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.718201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.718227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.722608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.722924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.722950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.727353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.727785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.727811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.732199] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.732476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.732502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.737029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.737327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.737353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.741883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.742183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.742209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.746736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.747025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.747052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.751485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.751853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.751881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.756444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.756794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.756826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.761711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.762217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.762264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:16:32.633 [2024-12-07 04:31:35.767318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.767683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.767712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.772741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.773147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.773203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.778010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.778298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.778323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.783220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.783559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.783589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.788382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.788676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.788728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.793485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.633 [2024-12-07 04:31:35.793976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.633 [2024-12-07 04:31:35.794023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.633 [2024-12-07 04:31:35.798537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.798891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.798922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.803427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.803790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.803839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.808439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.808765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.808797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.813550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.814064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.814125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.818723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.819011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.819037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.823617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.823991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.824135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.828698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.828982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.829009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.634 [2024-12-07 04:31:35.833996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:32.634 [2024-12-07 04:31:35.834313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.634 [2024-12-07 04:31:35.834339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:16:32.634 [2024-12-07 04:31:35.839116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90
00:16:32.634 [2024-12-07 04:31:35.839461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:32.634 [2024-12-07 04:31:35.839490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:32.634 [2024-12-07 04:31:35.844174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90
00:16:32.634 [2024-12-07 04:31:35.844493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:32.634 [2024-12-07 04:31:35.844519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (tcp.c:2036:data_crc32_calc_done data digest error, nvme_qpair.c WRITE command notice, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats, with varying lba and sqhd values, for each injected digest error on tqpair=(0x1f60f60) from 04:31:35.849 through 04:31:36.554 ...]
00:16:33.422 [2024-12-07 04:31:36.558870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90
00:16:33.422 [2024-12-07 04:31:36.559336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:33.422 [2024-12-07 04:31:36.559551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:16:33.422 [2024-12-07 04:31:36.564236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90
00:16:33.422 [2024-12-07 04:31:36.564696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:33.422 [2024-12-07 04:31:36.564758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.569300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.569587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.569614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.574060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.574353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.574379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.578797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.579078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.579103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.583703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.584003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.584043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.588363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.588856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.588903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.593396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.593702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.593729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.598181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.598461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.598487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.603006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.603292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.603319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.607723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.608061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.608087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.612543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.613054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.617464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.617790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.617817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.622305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.622586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.622612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.627067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.627344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.627394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.631905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.632209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.632236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.636667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.636983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.637011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.641308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.641587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.641612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.646075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.646366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.422 [2024-12-07 04:31:36.646391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.422 [2024-12-07 04:31:36.650753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.422 [2024-12-07 04:31:36.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.423 [2024-12-07 04:31:36.651056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.423 [2024-12-07 04:31:36.655900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.423 [2024-12-07 04:31:36.656233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.423 [2024-12-07 04:31:36.656274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.660998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.661276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.661302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.666086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.666404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 
[2024-12-07 04:31:36.666430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.670875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.671153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.671178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.675572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.675952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.676051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.680653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.680944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.685490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.685890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.685921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.690736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.691250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.691283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.696027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.696321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.696346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.700901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.701197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.701222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.705698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.706011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.706037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.710402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.710889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.710936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.715334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.715686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.715738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.720333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.720629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.720666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.725635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.726016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.726050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.731014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.731352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.731404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.736353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.736628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.736663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.741752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.742092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.742152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.747179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.747504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.747532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.752436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.752730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.752799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.757853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.758206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.758231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.763102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.763463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.763492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.768498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.768867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.768899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.773854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.774208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.774234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.779295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.779636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.779673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.784557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.684 [2024-12-07 04:31:36.784935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.684 [2024-12-07 04:31:36.784968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.684 [2024-12-07 04:31:36.789929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.790272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.790298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.795404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.795749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.795803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.800663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.801058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.801087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.805873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.806220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.806245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.810957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 
[2024-12-07 04:31:36.811272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.811299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.815900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.816193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.816219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.820583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.820933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.820964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.825414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.825908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.825956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.830445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.830755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.830781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.835245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.835588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.835623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.840159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.840453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.840479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.844882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.845181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.845207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.849662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.849960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.849986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.854419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.854737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.854764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.859122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.859425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.859452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.863879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.864173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.864199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.868629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.868987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.869050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.873500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.873800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.873827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.878257] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.878533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.878559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.882958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.883234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.883260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.887801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.888108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.888133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.892842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.893174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.893200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.897953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.898311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.898337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.903271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.903745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.903778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.908904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.909251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.909277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
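Every iteration in the run above follows the same two-step pattern: the target side's data_crc32_calc_done callback in tcp.c flags a data digest (CRC32C) mismatch on the incoming PDU, and the initiator then completes the corresponding WRITE with the status printed as "(00/22)", i.e. status code type 0x0 (generic command status) and status code 0x22, which the log itself names COMMAND TRANSIENT TRANSPORT ERROR. A small, purely illustrative bash helper for decoding that pair out of a log line (not part of the test scripts):

decode_nvme_status() {
    # Split the "SCT/SC" pair as printed by spdk_nvme_print_completion, e.g. "00/22"
    local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
    printf 'sct=0x%x sc=0x%x' "$sct" "$sc"
    # 0x22 under the generic status code type is Command Transient Transport Error
    ((sct == 0 && sc == 0x22)) && printf ' (Command Transient Transport Error)'
    printf '\n'
}

decode_nvme_status 00/22
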
00:16:33.685 [2024-12-07 04:31:36.914200] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.914484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.914511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.685 [2024-12-07 04:31:36.919705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.685 [2024-12-07 04:31:36.920066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.685 [2024-12-07 04:31:36.920107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.944 [2024-12-07 04:31:36.924899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.944 [2024-12-07 04:31:36.925198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.944 [2024-12-07 04:31:36.925225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.944 [2024-12-07 04:31:36.930104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.944 [2024-12-07 04:31:36.930406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.944 [2024-12-07 04:31:36.930431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.945 [2024-12-07 04:31:36.935210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.945 [2024-12-07 04:31:36.935718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.945 [2024-12-07 04:31:36.935752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.945 [2024-12-07 04:31:36.940833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f60f60) with pdu=0x2000190fef90 00:16:33.945 [2024-12-07 04:31:36.941252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.945 [2024-12-07 04:31:36.941291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.945 00:16:33.945 Latency(us) 00:16:33.945 [2024-12-07T04:31:37.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.945 [2024-12-07T04:31:37.185Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:33.945 nvme0n1 : 2.00 6185.99 773.25 0.00 0.00 2581.32 1578.82 5808.87 00:16:33.945 [2024-12-07T04:31:37.185Z] 
=================================================================================================================== 00:16:33.945 [2024-12-07T04:31:37.185Z] Total : 6185.99 773.25 0.00 0.00 2581.32 1578.82 5808.87 00:16:33.945 0 00:16:33.945 04:31:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:33.945 04:31:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:33.945 04:31:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:33.945 04:31:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:33.945 | .driver_specific 00:16:33.945 | .nvme_error 00:16:33.945 | .status_code 00:16:33.945 | .command_transient_transport_error' 00:16:34.204 04:31:37 -- host/digest.sh@71 -- # (( 399 > 0 )) 00:16:34.204 04:31:37 -- host/digest.sh@73 -- # killprocess 72067 00:16:34.204 04:31:37 -- common/autotest_common.sh@936 -- # '[' -z 72067 ']' 00:16:34.204 04:31:37 -- common/autotest_common.sh@940 -- # kill -0 72067 00:16:34.204 04:31:37 -- common/autotest_common.sh@941 -- # uname 00:16:34.204 04:31:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.204 04:31:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72067 00:16:34.204 killing process with pid 72067 00:16:34.204 Received shutdown signal, test time was about 2.000000 seconds 00:16:34.204 00:16:34.204 Latency(us) 00:16:34.204 [2024-12-07T04:31:37.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.204 [2024-12-07T04:31:37.444Z] =================================================================================================================== 00:16:34.204 [2024-12-07T04:31:37.444Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.204 04:31:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:34.204 04:31:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:34.204 04:31:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72067' 00:16:34.204 04:31:37 -- common/autotest_common.sh@955 -- # kill 72067 00:16:34.204 04:31:37 -- common/autotest_common.sh@960 -- # wait 72067 00:16:34.464 04:31:37 -- host/digest.sh@115 -- # killprocess 71863 00:16:34.464 04:31:37 -- common/autotest_common.sh@936 -- # '[' -z 71863 ']' 00:16:34.464 04:31:37 -- common/autotest_common.sh@940 -- # kill -0 71863 00:16:34.464 04:31:37 -- common/autotest_common.sh@941 -- # uname 00:16:34.464 04:31:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.464 04:31:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71863 00:16:34.464 04:31:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.464 04:31:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.464 04:31:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71863' 00:16:34.464 killing process with pid 71863 00:16:34.464 04:31:37 -- common/autotest_common.sh@955 -- # kill 71863 00:16:34.464 04:31:37 -- common/autotest_common.sh@960 -- # wait 71863 00:16:34.464 00:16:34.464 real 0m18.038s 00:16:34.464 user 0m35.094s 00:16:34.464 sys 0m4.511s 00:16:34.464 04:31:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:34.464 ************************************ 00:16:34.464 END TEST nvmf_digest_error 00:16:34.464 ************************************ 00:16:34.464 04:31:37 -- common/autotest_common.sh@10 -- # set +x 00:16:34.723 04:31:37 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:34.723 04:31:37 
-- host/digest.sh@139 -- # nvmftestfini 00:16:34.723 04:31:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.723 04:31:37 -- nvmf/common.sh@116 -- # sync 00:16:34.723 04:31:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:34.723 04:31:37 -- nvmf/common.sh@119 -- # set +e 00:16:34.723 04:31:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.723 04:31:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:34.723 rmmod nvme_tcp 00:16:34.723 rmmod nvme_fabrics 00:16:34.723 rmmod nvme_keyring 00:16:34.723 04:31:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.723 04:31:37 -- nvmf/common.sh@123 -- # set -e 00:16:34.723 04:31:37 -- nvmf/common.sh@124 -- # return 0 00:16:34.723 04:31:37 -- nvmf/common.sh@477 -- # '[' -n 71863 ']' 00:16:34.723 Process with pid 71863 is not found 00:16:34.723 04:31:37 -- nvmf/common.sh@478 -- # killprocess 71863 00:16:34.723 04:31:37 -- common/autotest_common.sh@936 -- # '[' -z 71863 ']' 00:16:34.723 04:31:37 -- common/autotest_common.sh@940 -- # kill -0 71863 00:16:34.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (71863) - No such process 00:16:34.723 04:31:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71863 is not found' 00:16:34.723 04:31:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:34.723 04:31:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:34.723 04:31:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:34.723 04:31:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.723 04:31:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:34.723 04:31:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.723 04:31:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.723 04:31:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.723 04:31:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:34.723 00:16:34.723 real 0m35.039s 00:16:34.723 user 1m6.510s 00:16:34.723 sys 0m9.158s 00:16:34.723 04:31:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:34.723 ************************************ 00:16:34.723 END TEST nvmf_digest 00:16:34.723 ************************************ 00:16:34.723 04:31:37 -- common/autotest_common.sh@10 -- # set +x 00:16:34.723 04:31:37 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:34.723 04:31:37 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:34.723 04:31:37 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:34.723 04:31:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:34.723 04:31:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.723 04:31:37 -- common/autotest_common.sh@10 -- # set +x 00:16:34.723 ************************************ 00:16:34.723 START TEST nvmf_multipath 00:16:34.723 ************************************ 00:16:34.723 04:31:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:34.983 * Looking for test storage... 
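For reference, the digest error run that finished just above is scored by reading the transient-transport-error counter out of the bperf bdev and asserting it is non-zero before the processes are torn down. A minimal sketch of that check, reusing the rpc.py path, socket, bdev name and jq filter shown in the trace (the helper body is a simplified reconstruction of what host/digest.sh traces, not the verbatim script):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
    # bdev_get_iostat exposes per-bdev NVMe error counters under driver_specific
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# Every injected data digest error must surface as a transient transport error
(( $(get_transient_errcount nvme0n1) > 0 ))
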
00:16:34.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:34.983 04:31:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:34.983 04:31:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:34.983 04:31:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:34.983 04:31:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:34.983 04:31:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:34.983 04:31:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:34.983 04:31:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:34.983 04:31:38 -- scripts/common.sh@335 -- # IFS=.-: 00:16:34.983 04:31:38 -- scripts/common.sh@335 -- # read -ra ver1 00:16:34.983 04:31:38 -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.983 04:31:38 -- scripts/common.sh@336 -- # read -ra ver2 00:16:34.983 04:31:38 -- scripts/common.sh@337 -- # local 'op=<' 00:16:34.983 04:31:38 -- scripts/common.sh@339 -- # ver1_l=2 00:16:34.983 04:31:38 -- scripts/common.sh@340 -- # ver2_l=1 00:16:34.983 04:31:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:34.983 04:31:38 -- scripts/common.sh@343 -- # case "$op" in 00:16:34.983 04:31:38 -- scripts/common.sh@344 -- # : 1 00:16:34.983 04:31:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:34.983 04:31:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:34.983 04:31:38 -- scripts/common.sh@364 -- # decimal 1 00:16:34.983 04:31:38 -- scripts/common.sh@352 -- # local d=1 00:16:34.983 04:31:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.983 04:31:38 -- scripts/common.sh@354 -- # echo 1 00:16:34.983 04:31:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:34.983 04:31:38 -- scripts/common.sh@365 -- # decimal 2 00:16:34.983 04:31:38 -- scripts/common.sh@352 -- # local d=2 00:16:34.983 04:31:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.983 04:31:38 -- scripts/common.sh@354 -- # echo 2 00:16:34.983 04:31:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:34.983 04:31:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:34.983 04:31:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:34.983 04:31:38 -- scripts/common.sh@367 -- # return 0 00:16:34.983 04:31:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.983 04:31:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.983 --rc genhtml_branch_coverage=1 00:16:34.983 --rc genhtml_function_coverage=1 00:16:34.983 --rc genhtml_legend=1 00:16:34.983 --rc geninfo_all_blocks=1 00:16:34.983 --rc geninfo_unexecuted_blocks=1 00:16:34.983 00:16:34.983 ' 00:16:34.983 04:31:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.983 --rc genhtml_branch_coverage=1 00:16:34.983 --rc genhtml_function_coverage=1 00:16:34.983 --rc genhtml_legend=1 00:16:34.983 --rc geninfo_all_blocks=1 00:16:34.983 --rc geninfo_unexecuted_blocks=1 00:16:34.983 00:16:34.983 ' 00:16:34.983 04:31:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.983 --rc genhtml_branch_coverage=1 00:16:34.983 --rc genhtml_function_coverage=1 00:16:34.983 --rc genhtml_legend=1 00:16:34.983 --rc geninfo_all_blocks=1 00:16:34.983 --rc geninfo_unexecuted_blocks=1 00:16:34.983 00:16:34.983 ' 00:16:34.983 
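The lcov probe above is autotest_common.sh choosing coverage flags: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field until one side differs. A hedged reconstruction of that helper, simplified from the trace rather than copied from scripts/common.sh:

cmp_versions() {
    local ver1 ver2 op=$2 v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all fields equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo 'lcov 1.15 predates 2.x'
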
04:31:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.983 --rc genhtml_branch_coverage=1 00:16:34.983 --rc genhtml_function_coverage=1 00:16:34.983 --rc genhtml_legend=1 00:16:34.983 --rc geninfo_all_blocks=1 00:16:34.983 --rc geninfo_unexecuted_blocks=1 00:16:34.983 00:16:34.983 ' 00:16:34.983 04:31:38 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.983 04:31:38 -- nvmf/common.sh@7 -- # uname -s 00:16:34.983 04:31:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.983 04:31:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.983 04:31:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.983 04:31:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.983 04:31:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.983 04:31:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.983 04:31:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.983 04:31:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.983 04:31:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.983 04:31:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.983 04:31:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:16:34.983 04:31:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:16:34.983 04:31:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.983 04:31:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.983 04:31:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.983 04:31:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.983 04:31:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.983 04:31:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.983 04:31:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.983 04:31:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.983 04:31:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.983 04:31:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.983 04:31:38 -- paths/export.sh@5 -- # export PATH 00:16:34.983 04:31:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.983 04:31:38 -- nvmf/common.sh@46 -- # : 0 00:16:34.983 04:31:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:34.983 04:31:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:34.983 04:31:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:34.983 04:31:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.983 04:31:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.983 04:31:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:34.984 04:31:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:34.984 04:31:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:34.984 04:31:38 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.984 04:31:38 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.984 04:31:38 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.984 04:31:38 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:34.984 04:31:38 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.984 04:31:38 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:34.984 04:31:38 -- host/multipath.sh@30 -- # nvmftestinit 00:16:34.984 04:31:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:34.984 04:31:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.984 04:31:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:34.984 04:31:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:34.984 04:31:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:34.984 04:31:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.984 04:31:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.984 04:31:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.984 04:31:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:34.984 04:31:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:34.984 04:31:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:34.984 04:31:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:34.984 04:31:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:34.984 04:31:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:34.984 04:31:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.984 04:31:38 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.984 04:31:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:34.984 04:31:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:34.984 04:31:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.984 04:31:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.984 04:31:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.984 04:31:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.984 04:31:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.984 04:31:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.984 04:31:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.984 04:31:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.984 04:31:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:34.984 04:31:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:34.984 Cannot find device "nvmf_tgt_br" 00:16:34.984 04:31:38 -- nvmf/common.sh@154 -- # true 00:16:34.984 04:31:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.984 Cannot find device "nvmf_tgt_br2" 00:16:34.984 04:31:38 -- nvmf/common.sh@155 -- # true 00:16:34.984 04:31:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:34.984 04:31:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:34.984 Cannot find device "nvmf_tgt_br" 00:16:34.984 04:31:38 -- nvmf/common.sh@157 -- # true 00:16:34.984 04:31:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:34.984 Cannot find device "nvmf_tgt_br2" 00:16:34.984 04:31:38 -- nvmf/common.sh@158 -- # true 00:16:34.984 04:31:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:35.245 04:31:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:35.245 04:31:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.245 04:31:38 -- nvmf/common.sh@161 -- # true 00:16:35.245 04:31:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.245 04:31:38 -- nvmf/common.sh@162 -- # true 00:16:35.245 04:31:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.245 04:31:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.245 04:31:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.245 04:31:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.245 04:31:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.245 04:31:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.245 04:31:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.245 04:31:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.245 04:31:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.245 04:31:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:35.245 04:31:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:35.245 04:31:38 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:16:35.245 04:31:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:35.245 04:31:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.245 04:31:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.245 04:31:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.245 04:31:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:35.245 04:31:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:35.245 04:31:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.245 04:31:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.245 04:31:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.245 04:31:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.245 04:31:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.245 04:31:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:35.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:16:35.245 00:16:35.245 --- 10.0.0.2 ping statistics --- 00:16:35.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.245 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:35.245 04:31:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:35.245 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.245 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:35.245 00:16:35.245 --- 10.0.0.3 ping statistics --- 00:16:35.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.245 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:35.245 04:31:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:35.245 00:16:35.245 --- 10.0.0.1 ping statistics --- 00:16:35.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.245 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:35.245 04:31:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.245 04:31:38 -- nvmf/common.sh@421 -- # return 0 00:16:35.245 04:31:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.245 04:31:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.245 04:31:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.245 04:31:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.245 04:31:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.245 04:31:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.245 04:31:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.245 04:31:38 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:35.245 04:31:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.245 04:31:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.245 04:31:38 -- common/autotest_common.sh@10 -- # set +x 00:16:35.245 04:31:38 -- nvmf/common.sh@469 -- # nvmfpid=72350 00:16:35.245 04:31:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:35.245 04:31:38 -- nvmf/common.sh@470 -- # waitforlisten 72350 00:16:35.245 04:31:38 -- common/autotest_common.sh@829 -- # '[' -z 72350 ']' 00:16:35.245 04:31:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.245 04:31:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.245 04:31:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.245 04:31:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.245 04:31:38 -- common/autotest_common.sh@10 -- # set +x 00:16:35.505 [2024-12-07 04:31:38.514038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:35.505 [2024-12-07 04:31:38.514136] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.505 [2024-12-07 04:31:38.652323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.505 [2024-12-07 04:31:38.704846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.505 [2024-12-07 04:31:38.705234] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.505 [2024-12-07 04:31:38.705346] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.505 [2024-12-07 04:31:38.705530] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
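For reference, the veth/bridge topology that nvmf_veth_init assembled in the trace above can be collected into one readable sketch; the interface names, addresses and iptables rules below are taken directly from the logged commands, and this is only a condensed restatement of them, not the test's actual helper function:

    # target-side interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target leg
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target leg
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joining all host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # reachability check, as in the log

With this in place the target addresses 10.0.0.2 and 10.0.0.3 sit inside nvmf_tgt_ns_spdk while the initiator reaches them over 10.0.0.1 through the nvmf_br bridge, which is why the nvmf_tgt process that starts next is launched via "ip netns exec nvmf_tgt_ns_spdk".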
00:16:35.505 [2024-12-07 04:31:38.705773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.505 [2024-12-07 04:31:38.705782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.441 04:31:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.441 04:31:39 -- common/autotest_common.sh@862 -- # return 0 00:16:36.441 04:31:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:36.441 04:31:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.441 04:31:39 -- common/autotest_common.sh@10 -- # set +x 00:16:36.441 04:31:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.441 04:31:39 -- host/multipath.sh@33 -- # nvmfapp_pid=72350 00:16:36.441 04:31:39 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.701 [2024-12-07 04:31:39.772014] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.701 04:31:39 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:36.959 Malloc0 00:16:36.959 04:31:40 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:37.218 04:31:40 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.477 04:31:40 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.736 [2024-12-07 04:31:40.851318] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.736 04:31:40 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:37.994 [2024-12-07 04:31:41.075488] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:37.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.994 04:31:41 -- host/multipath.sh@44 -- # bdevperf_pid=72402 00:16:37.994 04:31:41 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:37.994 04:31:41 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.994 04:31:41 -- host/multipath.sh@47 -- # waitforlisten 72402 /var/tmp/bdevperf.sock 00:16:37.994 04:31:41 -- common/autotest_common.sh@829 -- # '[' -z 72402 ']' 00:16:37.994 04:31:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.994 04:31:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.994 04:31:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:37.994 04:31:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.994 04:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:38.931 04:31:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.931 04:31:42 -- common/autotest_common.sh@862 -- # return 0 00:16:38.931 04:31:42 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:39.190 04:31:42 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:39.448 Nvme0n1 00:16:39.448 04:31:42 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:40.016 Nvme0n1 00:16:40.016 04:31:43 -- host/multipath.sh@78 -- # sleep 1 00:16:40.016 04:31:43 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:40.952 04:31:44 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:40.952 04:31:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:41.211 04:31:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:41.470 04:31:44 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:41.470 04:31:44 -- host/multipath.sh@65 -- # dtrace_pid=72447 00:16:41.470 04:31:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:41.470 04:31:44 -- host/multipath.sh@66 -- # sleep 6 00:16:48.032 04:31:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:48.032 04:31:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:48.032 04:31:50 -- host/multipath.sh@67 -- # active_port=4421 00:16:48.032 04:31:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:48.032 Attaching 4 probes... 
00:16:48.032 @path[10.0.0.2, 4421]: 19987 00:16:48.032 @path[10.0.0.2, 4421]: 20060 00:16:48.032 @path[10.0.0.2, 4421]: 20043 00:16:48.032 @path[10.0.0.2, 4421]: 20112 00:16:48.032 @path[10.0.0.2, 4421]: 19806 00:16:48.032 04:31:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:48.032 04:31:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:48.032 04:31:50 -- host/multipath.sh@69 -- # sed -n 1p 00:16:48.032 04:31:50 -- host/multipath.sh@69 -- # port=4421 00:16:48.032 04:31:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:48.032 04:31:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:48.032 04:31:50 -- host/multipath.sh@72 -- # kill 72447 00:16:48.032 04:31:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:48.032 04:31:50 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:48.032 04:31:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:48.032 04:31:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:48.292 04:31:51 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:48.292 04:31:51 -- host/multipath.sh@65 -- # dtrace_pid=72569 00:16:48.292 04:31:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:48.292 04:31:51 -- host/multipath.sh@66 -- # sleep 6 00:16:54.901 04:31:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:54.901 04:31:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:54.901 04:31:57 -- host/multipath.sh@67 -- # active_port=4420 00:16:54.901 04:31:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.901 Attaching 4 probes... 
00:16:54.901 @path[10.0.0.2, 4420]: 19958 00:16:54.901 @path[10.0.0.2, 4420]: 20419 00:16:54.901 @path[10.0.0.2, 4420]: 20064 00:16:54.901 @path[10.0.0.2, 4420]: 19950 00:16:54.901 @path[10.0.0.2, 4420]: 20196 00:16:54.901 04:31:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:54.901 04:31:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:54.901 04:31:57 -- host/multipath.sh@69 -- # sed -n 1p 00:16:54.901 04:31:57 -- host/multipath.sh@69 -- # port=4420 00:16:54.901 04:31:57 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:54.901 04:31:57 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:54.901 04:31:57 -- host/multipath.sh@72 -- # kill 72569 00:16:54.901 04:31:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.901 04:31:57 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:54.901 04:31:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:54.901 04:31:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:54.901 04:31:58 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:54.901 04:31:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:54.901 04:31:58 -- host/multipath.sh@65 -- # dtrace_pid=72687 00:16:54.901 04:31:58 -- host/multipath.sh@66 -- # sleep 6 00:17:01.470 04:32:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:01.470 04:32:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:01.470 04:32:04 -- host/multipath.sh@67 -- # active_port=4421 00:17:01.470 04:32:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:01.470 Attaching 4 probes... 
00:17:01.470 @path[10.0.0.2, 4421]: 14300 00:17:01.470 @path[10.0.0.2, 4421]: 19628 00:17:01.470 @path[10.0.0.2, 4421]: 20360 00:17:01.470 @path[10.0.0.2, 4421]: 20325 00:17:01.470 @path[10.0.0.2, 4421]: 20048 00:17:01.470 04:32:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:01.470 04:32:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:01.470 04:32:04 -- host/multipath.sh@69 -- # sed -n 1p 00:17:01.470 04:32:04 -- host/multipath.sh@69 -- # port=4421 00:17:01.470 04:32:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:01.470 04:32:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:01.470 04:32:04 -- host/multipath.sh@72 -- # kill 72687 00:17:01.470 04:32:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:01.470 04:32:04 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:01.470 04:32:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:01.470 04:32:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:01.729 04:32:04 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:01.729 04:32:04 -- host/multipath.sh@65 -- # dtrace_pid=72800 00:17:01.729 04:32:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.729 04:32:04 -- host/multipath.sh@66 -- # sleep 6 00:17:08.318 04:32:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:08.318 04:32:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:08.318 04:32:11 -- host/multipath.sh@67 -- # active_port= 00:17:08.318 04:32:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.318 Attaching 4 probes... 
00:17:08.318 00:17:08.318 00:17:08.318 00:17:08.318 00:17:08.318 00:17:08.318 04:32:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:08.318 04:32:11 -- host/multipath.sh@69 -- # sed -n 1p 00:17:08.318 04:32:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:08.318 04:32:11 -- host/multipath.sh@69 -- # port= 00:17:08.318 04:32:11 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:08.318 04:32:11 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:08.318 04:32:11 -- host/multipath.sh@72 -- # kill 72800 00:17:08.318 04:32:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.318 04:32:11 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:08.318 04:32:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:08.318 04:32:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:08.576 04:32:11 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:08.576 04:32:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.576 04:32:11 -- host/multipath.sh@65 -- # dtrace_pid=72917 00:17:08.576 04:32:11 -- host/multipath.sh@66 -- # sleep 6 00:17:15.156 04:32:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.156 04:32:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:15.156 04:32:18 -- host/multipath.sh@67 -- # active_port=4421 00:17:15.156 04:32:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.156 Attaching 4 probes... 
00:17:15.156 @path[10.0.0.2, 4421]: 19186 00:17:15.156 @path[10.0.0.2, 4421]: 19512 00:17:15.156 @path[10.0.0.2, 4421]: 19746 00:17:15.156 @path[10.0.0.2, 4421]: 19616 00:17:15.156 @path[10.0.0.2, 4421]: 19559 00:17:15.156 04:32:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.156 04:32:18 -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.156 04:32:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:15.156 04:32:18 -- host/multipath.sh@69 -- # port=4421 00:17:15.156 04:32:18 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:15.156 04:32:18 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:15.156 04:32:18 -- host/multipath.sh@72 -- # kill 72917 00:17:15.156 04:32:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.156 04:32:18 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:15.156 [2024-12-07 04:32:18.280833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.280999] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281144] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 [2024-12-07 04:32:18.281174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f91230 is same with the state(5) to be set 00:17:15.156 04:32:18 -- host/multipath.sh@101 -- # sleep 1 00:17:16.093 04:32:19 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:16.093 04:32:19 -- host/multipath.sh@65 -- # 
dtrace_pid=73042 00:17:16.093 04:32:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:16.093 04:32:19 -- host/multipath.sh@66 -- # sleep 6 00:17:22.665 04:32:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:22.665 04:32:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:22.665 04:32:25 -- host/multipath.sh@67 -- # active_port=4420 00:17:22.665 04:32:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.665 Attaching 4 probes... 00:17:22.665 @path[10.0.0.2, 4420]: 18976 00:17:22.665 @path[10.0.0.2, 4420]: 19441 00:17:22.665 @path[10.0.0.2, 4420]: 19872 00:17:22.665 @path[10.0.0.2, 4420]: 19672 00:17:22.665 @path[10.0.0.2, 4420]: 19510 00:17:22.665 04:32:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:22.665 04:32:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:22.665 04:32:25 -- host/multipath.sh@69 -- # sed -n 1p 00:17:22.665 04:32:25 -- host/multipath.sh@69 -- # port=4420 00:17:22.665 04:32:25 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:22.665 04:32:25 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:22.665 04:32:25 -- host/multipath.sh@72 -- # kill 73042 00:17:22.665 04:32:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.665 04:32:25 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:22.665 [2024-12-07 04:32:25.841856] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:22.665 04:32:25 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:22.940 04:32:26 -- host/multipath.sh@111 -- # sleep 6 00:17:29.507 04:32:32 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:29.507 04:32:32 -- host/multipath.sh@65 -- # dtrace_pid=73216 00:17:29.507 04:32:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72350 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:29.507 04:32:32 -- host/multipath.sh@66 -- # sleep 6 00:17:36.081 04:32:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:36.081 04:32:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:36.081 04:32:38 -- host/multipath.sh@67 -- # active_port=4421 00:17:36.081 04:32:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:36.081 Attaching 4 probes... 
00:17:36.081 @path[10.0.0.2, 4421]: 19227 00:17:36.081 @path[10.0.0.2, 4421]: 19363 00:17:36.081 @path[10.0.0.2, 4421]: 19316 00:17:36.081 @path[10.0.0.2, 4421]: 19322 00:17:36.081 @path[10.0.0.2, 4421]: 19278 00:17:36.081 04:32:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:36.081 04:32:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:36.081 04:32:38 -- host/multipath.sh@69 -- # sed -n 1p 00:17:36.081 04:32:38 -- host/multipath.sh@69 -- # port=4421 00:17:36.081 04:32:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:36.081 04:32:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:36.081 04:32:38 -- host/multipath.sh@72 -- # kill 73216 00:17:36.082 04:32:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:36.082 04:32:38 -- host/multipath.sh@114 -- # killprocess 72402 00:17:36.082 04:32:38 -- common/autotest_common.sh@936 -- # '[' -z 72402 ']' 00:17:36.082 04:32:38 -- common/autotest_common.sh@940 -- # kill -0 72402 00:17:36.082 04:32:38 -- common/autotest_common.sh@941 -- # uname 00:17:36.082 04:32:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.082 04:32:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72402 00:17:36.082 killing process with pid 72402 00:17:36.082 04:32:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:36.082 04:32:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:36.082 04:32:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72402' 00:17:36.082 04:32:38 -- common/autotest_common.sh@955 -- # kill 72402 00:17:36.082 04:32:38 -- common/autotest_common.sh@960 -- # wait 72402 00:17:36.082 Connection closed with partial response: 00:17:36.082 00:17:36.082 00:17:36.082 04:32:38 -- host/multipath.sh@116 -- # wait 72402 00:17:36.082 04:32:38 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:36.082 [2024-12-07 04:31:41.139096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:36.082 [2024-12-07 04:31:41.139226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72402 ] 00:17:36.082 [2024-12-07 04:31:41.275105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.082 [2024-12-07 04:31:41.342907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.082 Running I/O for 90 seconds... 
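Every "Attaching 4 probes... @path[...]" block above follows the same confirm_io_on_port pattern from host/multipath.sh: flip the ANA state of the two listeners, trace which path bdevperf actually submits I/O on, then compare the traced port against the listener that advertises the expected state. The following is a rough bash paraphrase built only from the commands visible in the trace; variable names and the output redirection of bpftrace.sh are illustrative assumptions, not the script's exact text:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # 1. set the ANA state of each listener, e.g. non_optimized on 4420 and optimized on 4421
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # 2. attach the nvmf_path.bt probes to the running nvmf_tgt pid and let I/O run for a while
    scripts/bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt > trace.txt &   # redirection assumed
    dtrace_pid=$!
    sleep 6

    # 3. ask the target which listener currently reports the expected ANA state ...
    state_port=$($rpc nvmf_subsystem_get_listeners $nqn \
        | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # 4. ... and which port the traced I/O actually went to (first @path line in trace.txt)
    io_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

    kill "$dtrace_pid"; rm -f trace.txt
    [[ "$io_port" == "$state_port" ]]   # the check passes only when I/O followed the ANA state

The per-path counters printed after "Attaching 4 probes..." (for example "@path[10.0.0.2, 4421]: 19987") are the I/O counts the bpftrace script accumulated on each listener, which is why an inaccessible/inaccessible round prints empty @path lines and an empty active port.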
00:17:36.082 [2024-12-07 04:31:51.294396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.294473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.294567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.294602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.294964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.294984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.294998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.295113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.295179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.295419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.295455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.295477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.295492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.296164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.296189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.296211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.082 [2024-12-07 04:31:51.296225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.296246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.082 [2024-12-07 04:31:51.296260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:36.082 [2024-12-07 04:31:51.296280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:36.083 [2024-12-07 04:31:51.296294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.296896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.296957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.296990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.297027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.297169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.083 [2024-12-07 04:31:51.297274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:17:36.083 [2024-12-07 04:31:51.297402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:36.083 [2024-12-07 04:31:51.297550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.083 [2024-12-07 04:31:51.297565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.297635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.297705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.297902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.297968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.297982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.298101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.298135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.298203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:36.084 [2024-12-07 04:31:51.298517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.298606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.298620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.300055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.300098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.300134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.300168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.300217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.084 [2024-12-07 04:31:51.300252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.300287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.084 [2024-12-07 04:31:51.300321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:36.084 [2024-12-07 04:31:51.300341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:51.300356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:51.300391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:51.300425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:51.300460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:51.300512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:51.300548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:51.300583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:51.300617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:51.300637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:51.300659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.837773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.837983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.837998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.838042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:36.085 
[2024-12-07 04:31:57.838094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.838194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.838287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.838321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.838422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.085 [2024-12-07 04:31:57.838525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.085 [2024-12-07 04:31:57.838797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:36.085 [2024-12-07 04:31:57.838819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.838834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.838857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.838872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.838894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.838909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.838931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.838946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.838969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.838984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.086 [2024-12-07 04:31:57.839572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:36.086 [2024-12-07 04:31:57.839749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.839953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:36.086 [2024-12-07 04:31:57.839991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.086 [2024-12-07 04:31:57.840006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.840123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.840157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.840507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:17:36.087 [2024-12-07 04:31:57.840912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.840965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.840987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.841018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.841082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.841117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.841151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.841184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.841226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.841267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.841301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.841335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.087 [2024-12-07 04:31:57.841369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:36.087 [2024-12-07 04:31:57.841389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.087 [2024-12-07 04:31:57.841403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.841612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.841741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.841953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.841969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.842917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.842946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.842982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 
[2024-12-07 04:31:57.843048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6392 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.088 [2024-12-07 04:31:57.843751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:31:57.843906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.088 [2024-12-07 04:31:57.843922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:36.088 [2024-12-07 04:32:04.874533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.874588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.874725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:73 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.874763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.874797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.874830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.874864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.874898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.874931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.874965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.874985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 
04:32:04.875130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.875958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.875978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.875993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.876021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.089 [2024-12-07 04:32:04.876051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:36.089 [2024-12-07 04:32:04.876071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.089 [2024-12-07 04:32:04.876085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.876141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.876442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.876518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.876585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.876772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.876979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.876993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 
04:32:04.877049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.877114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.877182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.877354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.090 [2024-12-07 04:32:04.877429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:36.090 [2024-12-07 04:32:04.877451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.090 [2024-12-07 04:32:04.877466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.877501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.877894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.877964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.877984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.877998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.091 [2024-12-07 04:32:04.878484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.091 [2024-12-07 04:32:04.878517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:36.091 [2024-12-07 04:32:04.878537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.092 [2024-12-07 04:32:04.878585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.878857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.878874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 
04:32:04.879844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.879871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.879904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.092 [2024-12-07 04:32:04.879920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.879948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.879963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.879991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.092 [2024-12-07 04:32:04.880092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.092 [2024-12-07 04:32:04.880177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.092 [2024-12-07 04:32:04.880358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.092 [2024-12-07 04:32:04.880400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:04.880600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:04.880614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.092 [2024-12-07 04:32:18.281495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.092 [2024-12-07 04:32:18.281507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281582] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.281756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.281784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.281826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.281853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.281908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:36.093 [2024-12-07 04:32:18.281922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.281975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.281988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 
04:32:18.282249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.282318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.282374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.282401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.093 [2024-12-07 04:32:18.282429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.093 [2024-12-07 04:32:18.282443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.093 [2024-12-07 04:32:18.282456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.282484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.282545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.282883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.282968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.282982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.282994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 
[2024-12-07 04:32:18.283123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.094 [2024-12-07 04:32:18.283441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.094 [2024-12-07 04:32:18.283515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.094 [2024-12-07 04:32:18.283528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.283780] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.283834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.283862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.283888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.283930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.283956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.283983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.283997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 
04:32:18.284342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.095 [2024-12-07 04:32:18.284574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.095 [2024-12-07 04:32:18.284588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.095 [2024-12-07 04:32:18.284601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.096 [2024-12-07 04:32:18.284627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.096 [2024-12-07 04:32:18.284670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.096 [2024-12-07 04:32:18.284769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.096 [2024-12-07 04:32:18.284823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.284978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.284990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.285005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-12-07 04:32:18.285017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.285031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3020 is same with the state(5) to be set 00:17:36.096 [2024-12-07 04:32:18.285064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:36.096 [2024-12-07 04:32:18.285075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:36.096 [2024-12-07 04:32:18.285085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131064 len:8 PRP1 0x0 PRP2 0x0 00:17:36.096 [2024-12-07 04:32:18.285097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.096 [2024-12-07 04:32:18.285142] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb3020 was disconnected and freed. reset controller. 
00:17:36.096 [2024-12-07 04:32:18.286225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.096 [2024-12-07 04:32:18.286314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8db20 (9): Bad file descriptor 00:17:36.096 [2024-12-07 04:32:18.286610] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.096 [2024-12-07 04:32:18.286693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.096 [2024-12-07 04:32:18.286743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.096 [2024-12-07 04:32:18.286764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8db20 with addr=10.0.0.2, port=4421 00:17:36.096 [2024-12-07 04:32:18.286778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8db20 is same with the state(5) to be set 00:17:36.096 [2024-12-07 04:32:18.286808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8db20 (9): Bad file descriptor 00:17:36.096 [2024-12-07 04:32:18.286836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:36.096 [2024-12-07 04:32:18.286852] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:36.096 [2024-12-07 04:32:18.286866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:36.096 [2024-12-07 04:32:18.286895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:36.096 [2024-12-07 04:32:18.286910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.096 [2024-12-07 04:32:28.339783] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
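Editor's note: the records above show the failover path the multipath test exercises — once the 10.0.0.2:4420 listener goes away, every command still queued on qpair 0xdb3020 is completed with "ABORTED - SQ DELETION", the qpair is disconnected and freed, and bdev_nvme resets the controller and reconnects to the second listener on port 4421. A minimal sketch of driving the same switch by hand with the rpc.py calls used elsewhere in this run (subsystem NQN, address and ports taken from this log; the exact toggling sequence in multipath.sh may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # make sure an alternate path exists before dropping the active one
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # dropping the active listener aborts its queued I/O (SQ deletion) and
    # forces bdev_nvme to reset and reconnect on the remaining path
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420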
00:17:36.096 Received shutdown signal, test time was about 55.288293 seconds 00:17:36.096 00:17:36.096 Latency(us) 00:17:36.096 [2024-12-07T04:32:39.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.096 [2024-12-07T04:32:39.336Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.096 Verification LBA range: start 0x0 length 0x4000 00:17:36.096 Nvme0n1 : 55.29 11224.94 43.85 0.00 0.00 11384.55 228.07 7046430.72 00:17:36.096 [2024-12-07T04:32:39.336Z] =================================================================================================================== 00:17:36.096 [2024-12-07T04:32:39.336Z] Total : 11224.94 43.85 0.00 0.00 11384.55 228.07 7046430.72 00:17:36.096 04:32:38 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.096 04:32:38 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:36.096 04:32:38 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:36.096 04:32:38 -- host/multipath.sh@125 -- # nvmftestfini 00:17:36.096 04:32:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:36.096 04:32:38 -- nvmf/common.sh@116 -- # sync 00:17:36.096 04:32:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:36.096 04:32:38 -- nvmf/common.sh@119 -- # set +e 00:17:36.096 04:32:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:36.096 04:32:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:36.096 rmmod nvme_tcp 00:17:36.096 rmmod nvme_fabrics 00:17:36.096 rmmod nvme_keyring 00:17:36.096 04:32:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:36.096 04:32:38 -- nvmf/common.sh@123 -- # set -e 00:17:36.096 04:32:38 -- nvmf/common.sh@124 -- # return 0 00:17:36.096 04:32:38 -- nvmf/common.sh@477 -- # '[' -n 72350 ']' 00:17:36.096 04:32:38 -- nvmf/common.sh@478 -- # killprocess 72350 00:17:36.096 04:32:38 -- common/autotest_common.sh@936 -- # '[' -z 72350 ']' 00:17:36.096 04:32:38 -- common/autotest_common.sh@940 -- # kill -0 72350 00:17:36.096 04:32:38 -- common/autotest_common.sh@941 -- # uname 00:17:36.096 04:32:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.096 04:32:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72350 00:17:36.096 04:32:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:36.096 killing process with pid 72350 00:17:36.096 04:32:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:36.096 04:32:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72350' 00:17:36.096 04:32:39 -- common/autotest_common.sh@955 -- # kill 72350 00:17:36.096 04:32:39 -- common/autotest_common.sh@960 -- # wait 72350 00:17:36.096 04:32:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:36.096 04:32:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:36.096 04:32:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:36.096 04:32:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:36.096 04:32:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:36.096 04:32:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.096 04:32:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.096 04:32:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.096 04:32:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:36.096 00:17:36.096 real 1m1.340s 00:17:36.096 user 2m50.117s 00:17:36.096 
sys 0m18.077s 00:17:36.096 04:32:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:36.096 04:32:39 -- common/autotest_common.sh@10 -- # set +x 00:17:36.096 ************************************ 00:17:36.096 END TEST nvmf_multipath 00:17:36.097 ************************************ 00:17:36.097 04:32:39 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:36.097 04:32:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:36.097 04:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.097 04:32:39 -- common/autotest_common.sh@10 -- # set +x 00:17:36.097 ************************************ 00:17:36.097 START TEST nvmf_timeout 00:17:36.097 ************************************ 00:17:36.097 04:32:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:36.357 * Looking for test storage... 00:17:36.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:36.357 04:32:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:36.357 04:32:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:36.357 04:32:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:36.357 04:32:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:36.357 04:32:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:36.357 04:32:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:36.357 04:32:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:36.357 04:32:39 -- scripts/common.sh@335 -- # IFS=.-: 00:17:36.357 04:32:39 -- scripts/common.sh@335 -- # read -ra ver1 00:17:36.357 04:32:39 -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.357 04:32:39 -- scripts/common.sh@336 -- # read -ra ver2 00:17:36.357 04:32:39 -- scripts/common.sh@337 -- # local 'op=<' 00:17:36.357 04:32:39 -- scripts/common.sh@339 -- # ver1_l=2 00:17:36.357 04:32:39 -- scripts/common.sh@340 -- # ver2_l=1 00:17:36.357 04:32:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:36.357 04:32:39 -- scripts/common.sh@343 -- # case "$op" in 00:17:36.357 04:32:39 -- scripts/common.sh@344 -- # : 1 00:17:36.357 04:32:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:36.357 04:32:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.357 04:32:39 -- scripts/common.sh@364 -- # decimal 1 00:17:36.357 04:32:39 -- scripts/common.sh@352 -- # local d=1 00:17:36.357 04:32:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.357 04:32:39 -- scripts/common.sh@354 -- # echo 1 00:17:36.357 04:32:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:36.357 04:32:39 -- scripts/common.sh@365 -- # decimal 2 00:17:36.357 04:32:39 -- scripts/common.sh@352 -- # local d=2 00:17:36.357 04:32:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.357 04:32:39 -- scripts/common.sh@354 -- # echo 2 00:17:36.357 04:32:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:36.357 04:32:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:36.357 04:32:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:36.357 04:32:39 -- scripts/common.sh@367 -- # return 0 00:17:36.357 04:32:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.357 04:32:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:36.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.357 --rc genhtml_branch_coverage=1 00:17:36.357 --rc genhtml_function_coverage=1 00:17:36.357 --rc genhtml_legend=1 00:17:36.357 --rc geninfo_all_blocks=1 00:17:36.357 --rc geninfo_unexecuted_blocks=1 00:17:36.357 00:17:36.357 ' 00:17:36.357 04:32:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:36.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.357 --rc genhtml_branch_coverage=1 00:17:36.357 --rc genhtml_function_coverage=1 00:17:36.357 --rc genhtml_legend=1 00:17:36.357 --rc geninfo_all_blocks=1 00:17:36.357 --rc geninfo_unexecuted_blocks=1 00:17:36.357 00:17:36.357 ' 00:17:36.357 04:32:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:36.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.357 --rc genhtml_branch_coverage=1 00:17:36.357 --rc genhtml_function_coverage=1 00:17:36.357 --rc genhtml_legend=1 00:17:36.357 --rc geninfo_all_blocks=1 00:17:36.357 --rc geninfo_unexecuted_blocks=1 00:17:36.357 00:17:36.357 ' 00:17:36.357 04:32:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:36.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.357 --rc genhtml_branch_coverage=1 00:17:36.357 --rc genhtml_function_coverage=1 00:17:36.357 --rc genhtml_legend=1 00:17:36.357 --rc geninfo_all_blocks=1 00:17:36.357 --rc geninfo_unexecuted_blocks=1 00:17:36.357 00:17:36.357 ' 00:17:36.357 04:32:39 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.357 04:32:39 -- nvmf/common.sh@7 -- # uname -s 00:17:36.357 04:32:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.357 04:32:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.357 04:32:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.357 04:32:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.357 04:32:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.357 04:32:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.357 04:32:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.357 04:32:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.357 04:32:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.357 04:32:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.357 04:32:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:17:36.357 
04:32:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:17:36.357 04:32:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.357 04:32:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.357 04:32:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.357 04:32:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.357 04:32:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.357 04:32:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.357 04:32:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.357 04:32:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.357 04:32:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.357 04:32:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.357 04:32:39 -- paths/export.sh@5 -- # export PATH 00:17:36.357 04:32:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.357 04:32:39 -- nvmf/common.sh@46 -- # : 0 00:17:36.357 04:32:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.357 04:32:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.357 04:32:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.357 04:32:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.357 04:32:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.357 04:32:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
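Editor's note: the NVME_HOSTNQN/NVME_HOSTID values generated above populate the NVME_HOST array that nvmf/common.sh combines with NVME_CONNECT. A sketch of the resulting kernel-initiator connect, for reference only — this particular run drives I/O through bdevperf rather than the kernel initiator; address, port and subsystem NQN are the ones used throughout this log:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b \
        --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b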
00:17:36.357 04:32:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.357 04:32:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.357 04:32:39 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.357 04:32:39 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.357 04:32:39 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.357 04:32:39 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:36.357 04:32:39 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.357 04:32:39 -- host/timeout.sh@19 -- # nvmftestinit 00:17:36.357 04:32:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:36.357 04:32:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.358 04:32:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:36.358 04:32:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:36.358 04:32:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:36.358 04:32:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.358 04:32:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.358 04:32:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.358 04:32:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:36.358 04:32:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:36.358 04:32:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:36.358 04:32:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:36.358 04:32:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:36.358 04:32:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:36.358 04:32:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.358 04:32:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.358 04:32:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:36.358 04:32:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:36.358 04:32:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.358 04:32:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.358 04:32:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.358 04:32:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.358 04:32:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.358 04:32:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.358 04:32:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.358 04:32:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.358 04:32:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:36.358 04:32:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:36.358 Cannot find device "nvmf_tgt_br" 00:17:36.358 04:32:39 -- nvmf/common.sh@154 -- # true 00:17:36.358 04:32:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.358 Cannot find device "nvmf_tgt_br2" 00:17:36.358 04:32:39 -- nvmf/common.sh@155 -- # true 00:17:36.358 04:32:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:36.358 04:32:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:36.358 Cannot find device "nvmf_tgt_br" 00:17:36.358 04:32:39 -- nvmf/common.sh@157 -- # true 00:17:36.358 04:32:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:36.358 Cannot find device "nvmf_tgt_br2" 00:17:36.358 04:32:39 -- nvmf/common.sh@158 -- # true 00:17:36.358 04:32:39 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:36.358 04:32:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:36.617 04:32:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.617 04:32:39 -- nvmf/common.sh@161 -- # true 00:17:36.617 04:32:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.617 04:32:39 -- nvmf/common.sh@162 -- # true 00:17:36.617 04:32:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.617 04:32:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.617 04:32:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.617 04:32:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.617 04:32:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.617 04:32:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.617 04:32:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.617 04:32:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:36.617 04:32:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:36.617 04:32:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:36.617 04:32:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:36.617 04:32:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:36.617 04:32:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:36.617 04:32:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.617 04:32:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.617 04:32:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.617 04:32:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:36.617 04:32:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:36.617 04:32:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.617 04:32:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.617 04:32:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.617 04:32:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.617 04:32:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.617 04:32:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:36.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:17:36.617 00:17:36.617 --- 10.0.0.2 ping statistics --- 00:17:36.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.617 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:36.617 04:32:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:36.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:36.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:36.617 00:17:36.617 --- 10.0.0.3 ping statistics --- 00:17:36.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.617 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:36.617 04:32:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:36.617 00:17:36.617 --- 10.0.0.1 ping statistics --- 00:17:36.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.617 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:36.617 04:32:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.617 04:32:39 -- nvmf/common.sh@421 -- # return 0 00:17:36.617 04:32:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:36.617 04:32:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.617 04:32:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:36.618 04:32:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:36.618 04:32:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.618 04:32:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:36.618 04:32:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:36.618 04:32:39 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:36.618 04:32:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:36.618 04:32:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.618 04:32:39 -- common/autotest_common.sh@10 -- # set +x 00:17:36.618 04:32:39 -- nvmf/common.sh@469 -- # nvmfpid=73538 00:17:36.618 04:32:39 -- nvmf/common.sh@470 -- # waitforlisten 73538 00:17:36.618 04:32:39 -- common/autotest_common.sh@829 -- # '[' -z 73538 ']' 00:17:36.618 04:32:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:36.618 04:32:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.618 04:32:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.618 04:32:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.618 04:32:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.618 04:32:39 -- common/autotest_common.sh@10 -- # set +x 00:17:36.877 [2024-12-07 04:32:39.889113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:36.877 [2024-12-07 04:32:39.889202] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.877 [2024-12-07 04:32:40.031490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.877 [2024-12-07 04:32:40.098478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:36.877 [2024-12-07 04:32:40.098669] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.877 [2024-12-07 04:32:40.098685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:36.877 [2024-12-07 04:32:40.098697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.877 [2024-12-07 04:32:40.098869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.877 [2024-12-07 04:32:40.098880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.815 04:32:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.815 04:32:40 -- common/autotest_common.sh@862 -- # return 0 00:17:37.815 04:32:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:37.815 04:32:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.815 04:32:40 -- common/autotest_common.sh@10 -- # set +x 00:17:37.815 04:32:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.815 04:32:40 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.815 04:32:40 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:38.074 [2024-12-07 04:32:41.151073] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.074 04:32:41 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:38.333 Malloc0 00:17:38.333 04:32:41 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.592 04:32:41 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.852 04:32:41 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.111 [2024-12-07 04:32:42.122501] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.111 04:32:42 -- host/timeout.sh@32 -- # bdevperf_pid=73587 00:17:39.111 04:32:42 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:39.111 04:32:42 -- host/timeout.sh@34 -- # waitforlisten 73587 /var/tmp/bdevperf.sock 00:17:39.111 04:32:42 -- common/autotest_common.sh@829 -- # '[' -z 73587 ']' 00:17:39.111 04:32:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.111 04:32:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.111 04:32:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.111 04:32:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.111 04:32:42 -- common/autotest_common.sh@10 -- # set +x 00:17:39.111 [2024-12-07 04:32:42.188062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:39.111 [2024-12-07 04:32:42.188175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73587 ] 00:17:39.111 [2024-12-07 04:32:42.318853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.369 [2024-12-07 04:32:42.386886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.303 04:32:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.303 04:32:43 -- common/autotest_common.sh@862 -- # return 0 00:17:40.303 04:32:43 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:40.303 04:32:43 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:40.561 NVMe0n1 00:17:40.819 04:32:43 -- host/timeout.sh@51 -- # rpc_pid=73612 00:17:40.819 04:32:43 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.819 04:32:43 -- host/timeout.sh@53 -- # sleep 1 00:17:40.819 Running I/O for 10 seconds... 00:17:41.755 04:32:44 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.020 [2024-12-07 04:32:45.071514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 
[2024-12-07 04:32:45.071789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.021 [2024-12-07 04:32:45.071906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.071992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0480 is same with the state(5) to be set 00:17:42.022 [2024-12-07 04:32:45.072052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.022 [2024-12-07 04:32:45.072083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.022 [2024-12-07 04:32:45.072106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.022 [2024-12-07 04:32:45.072117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.022 [2024-12-07 04:32:45.072131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.022 [2024-12-07 04:32:45.072140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.022 [2024-12-07 04:32:45.072151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.022 [2024-12-07 04:32:45.072160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.022 [2024-12-07 04:32:45.072171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.022 [2024-12-07 04:32:45.072181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.022 [2024-12-07 04:32:45.072192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.023 [2024-12-07 04:32:45.072320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.023 [2024-12-07 04:32:45.072331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.024 [2024-12-07 04:32:45.072340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.024 [2024-12-07 04:32:45.072350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.024 [2024-12-07 04:32:45.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.024 [2024-12-07 04:32:45.072370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.024 [2024-12-07 04:32:45.072379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.024 [2024-12-07 04:32:45.072390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.024 [2024-12-07 04:32:45.072400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.024 [2024-12-07 04:32:45.072411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.024 [2024-12-07 04:32:45.072422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.024 [2024-12-07 04:32:45.072433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.024 [2024-12-07 04:32:45.072442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 
[2024-12-07 04:32:45.072482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.025 [2024-12-07 04:32:45.072711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.025 [2024-12-07 04:32:45.072725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.072798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.072840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.072861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.072923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.072965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.072985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.072996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.026 [2024-12-07 04:32:45.073627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.026 [2024-12-07 04:32:45.073647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.026 [2024-12-07 04:32:45.073662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:42.027 [2024-12-07 04:32:45.073778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.073910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.073951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.073966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.073975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 
04:32:45.073987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.073998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074402] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.027 [2024-12-07 04:32:45.074574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.027 [2024-12-07 04:32:45.074754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baa0c0 is same with the state(5) to be set 00:17:42.027 [2024-12-07 04:32:45.074777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:42.027 [2024-12-07 04:32:45.074785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:42.027 [2024-12-07 04:32:45.074794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126320 len:8 PRP1 0x0 PRP2 0x0 00:17:42.027 [2024-12-07 04:32:45.074803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.027 [2024-12-07 04:32:45.074845] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1baa0c0 was disconnected and freed. reset controller. 
00:17:42.027 [2024-12-07 04:32:45.074924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.028 [2024-12-07 04:32:45.074953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.028 [2024-12-07 04:32:45.074966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.028 [2024-12-07 04:32:45.074975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.028 [2024-12-07 04:32:45.074985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.028 [2024-12-07 04:32:45.074995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.028 [2024-12-07 04:32:45.075005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:42.028 [2024-12-07 04:32:45.075014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.028 [2024-12-07 04:32:45.075023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b47010 is same with the state(5) to be set 00:17:42.028 [2024-12-07 04:32:45.075245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:42.028 [2024-12-07 04:32:45.075275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47010 (9): Bad file descriptor 00:17:42.028 [2024-12-07 04:32:45.075383] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:42.028 [2024-12-07 04:32:45.075462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:42.028 [2024-12-07 04:32:45.075514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:42.028 [2024-12-07 04:32:45.075530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b47010 with addr=10.0.0.2, port=4420 00:17:42.028 [2024-12-07 04:32:45.075542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b47010 is same with the state(5) to be set 00:17:42.028 [2024-12-07 04:32:45.075575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47010 (9): Bad file descriptor 00:17:42.028 [2024-12-07 04:32:45.075598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:42.028 [2024-12-07 04:32:45.075608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:42.028 [2024-12-07 04:32:45.075618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:42.028 [2024-12-07 04:32:45.075660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
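The reconnect loop that follows is driven by the controller options traced earlier in this run (host/timeout.sh@45-46) together with the listener removal traced at host/timeout.sh@55. Below is a condensed, non-authoritative sketch of that sequence, not a reproduction of the test script; the rpc.py paths, socket, addresses and NQN are copied verbatim from the trace above, and the timing interpretation is an inference from the surrounding log entries.

    # Controller attached with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5;
    # the retries logged at 04:32:45/47/49 and the "already in failed state" entry at
    # 04:32:51 are consistent with those values.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Pulling the listener out from under the active verify workload produces the
    # "ABORTED - SQ DELETION" dump above and the connect() failures (errno 111) below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420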
00:17:42.028 [2024-12-07 04:32:45.075674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:42.028 04:32:45 -- host/timeout.sh@56 -- # sleep 2 00:17:43.928 [2024-12-07 04:32:47.075813] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.928 [2024-12-07 04:32:47.075937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.928 [2024-12-07 04:32:47.075980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.928 [2024-12-07 04:32:47.075996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b47010 with addr=10.0.0.2, port=4420 00:17:43.928 [2024-12-07 04:32:47.076023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b47010 is same with the state(5) to be set 00:17:43.928 [2024-12-07 04:32:47.076049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47010 (9): Bad file descriptor 00:17:43.928 [2024-12-07 04:32:47.076067] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:43.928 [2024-12-07 04:32:47.076076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:43.928 [2024-12-07 04:32:47.076086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:43.928 [2024-12-07 04:32:47.076111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:43.928 [2024-12-07 04:32:47.076122] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.929 04:32:47 -- host/timeout.sh@57 -- # get_controller 00:17:43.929 04:32:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:43.929 04:32:47 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:44.185 04:32:47 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:44.186 04:32:47 -- host/timeout.sh@58 -- # get_bdev 00:17:44.186 04:32:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:44.186 04:32:47 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:44.443 04:32:47 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:44.443 04:32:47 -- host/timeout.sh@61 -- # sleep 5 00:17:45.905 [2024-12-07 04:32:49.076245] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.905 [2024-12-07 04:32:49.076373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.905 [2024-12-07 04:32:49.076415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.905 [2024-12-07 04:32:49.076431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b47010 with addr=10.0.0.2, port=4420 00:17:45.905 [2024-12-07 04:32:49.076443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b47010 is same with the state(5) to be set 00:17:45.905 [2024-12-07 04:32:49.076469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47010 (9): Bad file descriptor 00:17:45.905 [2024-12-07 04:32:49.076487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.905 [2024-12-07 04:32:49.076496] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:45.905 [2024-12-07 04:32:49.076505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:45.905 [2024-12-07 04:32:49.076530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:45.905 [2024-12-07 04:32:49.076541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:48.441 [2024-12-07 04:32:51.076568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:48.441 [2024-12-07 04:32:51.076673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:48.441 [2024-12-07 04:32:51.076687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:48.441 [2024-12-07 04:32:51.076698] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:48.441 [2024-12-07 04:32:51.076724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:49.008 00:17:49.008 Latency(us) 00:17:49.008 [2024-12-07T04:32:52.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.008 [2024-12-07T04:32:52.248Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:49.008 Verification LBA range: start 0x0 length 0x4000 00:17:49.008 NVMe0n1 : 8.16 1926.71 7.53 15.68 0.00 65797.26 3187.43 7015926.69 00:17:49.008 [2024-12-07T04:32:52.248Z] =================================================================================================================== 00:17:49.008 [2024-12-07T04:32:52.249Z] Total : 1926.71 7.53 15.68 0.00 65797.26 3187.43 7015926.69 00:17:49.009 0 00:17:49.577 04:32:52 -- host/timeout.sh@62 -- # get_controller 00:17:49.577 04:32:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:49.577 04:32:52 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:49.837 04:32:52 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:49.837 04:32:52 -- host/timeout.sh@63 -- # get_bdev 00:17:49.837 04:32:52 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:49.837 04:32:52 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:50.096 04:32:53 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:50.096 04:32:53 -- host/timeout.sh@65 -- # wait 73612 00:17:50.096 04:32:53 -- host/timeout.sh@67 -- # killprocess 73587 00:17:50.096 04:32:53 -- common/autotest_common.sh@936 -- # '[' -z 73587 ']' 00:17:50.096 04:32:53 -- common/autotest_common.sh@940 -- # kill -0 73587 00:17:50.096 04:32:53 -- common/autotest_common.sh@941 -- # uname 00:17:50.096 04:32:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.096 04:32:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73587 00:17:50.096 killing process with pid 73587 00:17:50.096 Received shutdown signal, test time was about 9.293957 seconds 00:17:50.096 00:17:50.096 Latency(us) 00:17:50.096 [2024-12-07T04:32:53.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.096 [2024-12-07T04:32:53.336Z] =================================================================================================================== 00:17:50.096 
[2024-12-07T04:32:53.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.096 04:32:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:50.096 04:32:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:50.096 04:32:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73587' 00:17:50.096 04:32:53 -- common/autotest_common.sh@955 -- # kill 73587 00:17:50.096 04:32:53 -- common/autotest_common.sh@960 -- # wait 73587 00:17:50.356 04:32:53 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.356 [2024-12-07 04:32:53.579475] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.615 04:32:53 -- host/timeout.sh@74 -- # bdevperf_pid=73729 00:17:50.615 04:32:53 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:50.615 04:32:53 -- host/timeout.sh@76 -- # waitforlisten 73729 /var/tmp/bdevperf.sock 00:17:50.615 04:32:53 -- common/autotest_common.sh@829 -- # '[' -z 73729 ']' 00:17:50.615 04:32:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.615 04:32:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.615 04:32:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.615 04:32:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.615 04:32:53 -- common/autotest_common.sh@10 -- # set +x 00:17:50.615 [2024-12-07 04:32:53.650763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:50.615 [2024-12-07 04:32:53.651081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73729 ] 00:17:50.615 [2024-12-07 04:32:53.789387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.615 [2024-12-07 04:32:53.847127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.553 04:32:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.553 04:32:54 -- common/autotest_common.sh@862 -- # return 0 00:17:51.553 04:32:54 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:51.813 04:32:54 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:52.070 NVMe0n1 00:17:52.070 04:32:55 -- host/timeout.sh@84 -- # rpc_pid=73758 00:17:52.070 04:32:55 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.070 04:32:55 -- host/timeout.sh@86 -- # sleep 1 00:17:52.070 Running I/O for 10 seconds... 
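For reference, a minimal Python sketch (not part of the original test) of the RPC sequence the host/timeout.sh trace above drives through /var/tmp/bdevperf.sock. The subprocess wrapper and the final assertion are assumptions; every rpc.py command, flag, and value is copied from the trace (bdev_nvme_set_options -r -1, the attach with the 5 s controller-loss / 2 s fast-io-fail / 1 s reconnect-delay knobs, and the name checks done at host/timeout.sh@57-58 with jq).

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bdevperf.sock"

def rpc(*args):
    # Run scripts/rpc.py against the bdevperf RPC socket and return its stdout.
    return subprocess.run([RPC, "-s", SOCK, *args], check=True,
                          capture_output=True, text=True).stdout

# Unlimited transport retries, then attach the target with the same reconnect
# knobs as host/timeout.sh@78-79: give the controller up after 5 s of loss,
# fail fast I/O after 2 s, retry the connection every 1 s.
rpc("bdev_nvme_set_options", "-r", "-1")
rpc("bdev_nvme_attach_controller", "-b", "NVMe0", "-t", "tcp",
    "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1",
    "--ctrlr-loss-timeout-sec", "5",
    "--fast-io-fail-timeout-sec", "2",
    "--reconnect-delay-sec", "1")

# Equivalent of the jq -r '.[].name' checks in the shell trace.
controller_names = [c["name"] for c in json.loads(rpc("bdev_nvme_get_controllers"))]
bdev_names = [b["name"] for b in json.loads(rpc("bdev_get_bdevs"))]
assert controller_names == ["NVMe0"] and bdev_names == ["NVMe0n1"]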
00:17:53.005 04:32:56 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.267 [2024-12-07 04:32:56.450489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13107b0 is same with the state(5) to be set 00:17:53.267 [2024-12-07 04:32:56.450849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.450879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.450899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.450909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.450921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.450930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.450940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.450949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.450960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.450969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.450979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.450988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.450998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 
04:32:56.451087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.267 [2024-12-07 04:32:56.451462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.267 [2024-12-07 04:32:56.451472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:53.268 [2024-12-07 04:32:56.451758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.451799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.451983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.451995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.452024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.452045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.452065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.452105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.268 [2024-12-07 04:32:56.452166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.268 [2024-12-07 04:32:56.452291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.268 [2024-12-07 04:32:56.452302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:53.269 [2024-12-07 04:32:56.452382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.452961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.452981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.452992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.269 [2024-12-07 04:32:56.453001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:53.269 [2024-12-07 04:32:56.453021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.453042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.453062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.453083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.453103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.269 [2024-12-07 04:32:56.453122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.269 [2024-12-07 04:32:56.453133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.270 [2024-12-07 04:32:56.453306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.270 [2024-12-07 04:32:56.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.270 [2024-12-07 04:32:56.453367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.270 [2024-12-07 04:32:56.453387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.270 [2024-12-07 04:32:56.453407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.270 [2024-12-07 04:32:56.453572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab10c0 is same with the state(5) to be set 00:17:53.270 [2024-12-07 04:32:56.453595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.270 [2024-12-07 04:32:56.453602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.270 [2024-12-07 04:32:56.453610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1104 len:8 PRP1 0x0 PRP2 0x0 00:17:53.270 [2024-12-07 04:32:56.453619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.270 [2024-12-07 04:32:56.453672] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab10c0 was disconnected and freed. reset controller. 
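Each ABORTED - SQ DELETION entry above is one queued read or write being completed manually while the I/O qpair is torn down (the "aborting queued i/o" / "Command completed manually" notices), and the (00/08) pair is the NVMe status code type and status code. A small decoding sketch; the constant names here are illustrative, the numeric values follow the NVMe spec:

# Status Code Type 0x0 = Generic Command Status,
# Status Code 0x08 = Command Aborted due to SQ Deletion.
SCT_GENERIC = 0x0
SC_ABORTED_SQ_DELETION = 0x08

def decode_status(sct: int, sc: int) -> str:
    if sct == SCT_GENERIC and sc == SC_ABORTED_SQ_DELETION:
        return "ABORTED - SQ DELETION"
    return f"sct=0x{sct:02x} sc=0x{sc:02x}"

print(decode_status(0x00, 0x08))  # matches the (00/08) printed in the entries above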
00:17:53.270 [2024-12-07 04:32:56.453925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.270 [2024-12-07 04:32:56.454010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:17:53.270 [2024-12-07 04:32:56.454111] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:53.270 [2024-12-07 04:32:56.454186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:53.270 [2024-12-07 04:32:56.454229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:53.270 [2024-12-07 04:32:56.454246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e010 with addr=10.0.0.2, port=4420 00:17:53.270 [2024-12-07 04:32:56.454257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e010 is same with the state(5) to be set 00:17:53.270 [2024-12-07 04:32:56.454277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:17:53.270 [2024-12-07 04:32:56.454293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:53.270 [2024-12-07 04:32:56.454303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:53.270 [2024-12-07 04:32:56.454313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:53.270 [2024-12-07 04:32:56.454333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:53.270 [2024-12-07 04:32:56.454343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.270 04:32:56 -- host/timeout.sh@90 -- # sleep 1 00:17:54.647 [2024-12-07 04:32:57.454469] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.647 [2024-12-07 04:32:57.454575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.647 [2024-12-07 04:32:57.454618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.647 [2024-12-07 04:32:57.454634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e010 with addr=10.0.0.2, port=4420 00:17:54.647 [2024-12-07 04:32:57.454647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e010 is same with the state(5) to be set 00:17:54.647 [2024-12-07 04:32:57.454704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:17:54.647 [2024-12-07 04:32:57.454753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.647 [2024-12-07 04:32:57.454763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:54.647 [2024-12-07 04:32:57.454773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:54.647 [2024-12-07 04:32:57.454814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
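The connect() failures above with errno = 111 (ECONNREFUSED on Linux) are expected at this point: host/timeout.sh pulls the subsystem's TCP listener out from under an active bdevperf run, lets the host's reconnect attempts fail for a few seconds, then restores the listener, which is why the @91 add_listener step and a successful reset follow immediately below. Pieced together from the host/timeout.sh@96-@103 xtrace entries further down in this log, one iteration of that exercise looks roughly like the sketch below; only the commands themselves are taken verbatim from the trace, while the backgrounding, the variable names and the overall wiring are inferred:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # @96/@97: start the verify workload over the bdevperf RPC socket and keep
    # its pid so the script can block on it after the fault has been injected.
    "$bdevperf_py" -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1                                                          # @98

    # @99: drop the listener mid-I/O; outstanding commands complete with
    # ABORTED - SQ DELETION and host reconnects are refused (errno 111).
    "$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 3                                                          # @101

    # @102: bring the listener back so the next controller reset succeeds.
    "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    wait "$rpc_pid"                                                  # @103: collect perform_tests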
00:17:54.647 [2024-12-07 04:32:57.454825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:54.647 04:32:57 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.648 [2024-12-07 04:32:57.712319] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.648 04:32:57 -- host/timeout.sh@92 -- # wait 73758 00:17:55.582 [2024-12-07 04:32:58.467094] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:02.147 00:18:02.147 Latency(us) 00:18:02.147 [2024-12-07T04:33:05.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.147 [2024-12-07T04:33:05.387Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:02.147 Verification LBA range: start 0x0 length 0x4000 00:18:02.147 NVMe0n1 : 10.01 9714.56 37.95 0.00 0.00 13154.52 975.59 3019898.88 00:18:02.147 [2024-12-07T04:33:05.387Z] =================================================================================================================== 00:18:02.147 [2024-12-07T04:33:05.387Z] Total : 9714.56 37.95 0.00 0.00 13154.52 975.59 3019898.88 00:18:02.147 0 00:18:02.147 04:33:05 -- host/timeout.sh@97 -- # rpc_pid=73863 00:18:02.147 04:33:05 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.147 04:33:05 -- host/timeout.sh@98 -- # sleep 1 00:18:02.405 Running I/O for 10 seconds... 00:18:03.343 04:33:06 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.343 [2024-12-07 04:33:06.558445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558743] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558804] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130f4a0 is same with the state(5) to be set 00:18:03.343 [2024-12-07 04:33:06.558923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.558955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.558976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.558987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.558999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-12-07 04:33:06.559284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-12-07 04:33:06.559296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:03.344 [2024-12-07 04:33:06.559564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.559669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.559710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.559731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 
04:33:06.559782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.559791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.559831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.559984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.559993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.560013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.560034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.560054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.344 [2024-12-07 04:33:06.560073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.560094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-12-07 04:33:06.560114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-12-07 04:33:06.560125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:03.345 [2024-12-07 04:33:06.560815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-12-07 04:33:06.560916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-12-07 04:33:06.560937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-12-07 04:33:06.560948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.560957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.560969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.560978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.560989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.560999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.561020] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.561204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.561225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.561265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.561285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-12-07 04:33:06.561413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561433] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.346 [2024-12-07 04:33:06.561575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4900 is same with the state(5) to be set 00:18:03.346 [2024-12-07 04:33:06.561597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.346 [2024-12-07 04:33:06.561605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.346 [2024-12-07 04:33:06.561613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126320 len:8 PRP1 0x0 PRP2 0x0 00:18:03.346 [2024-12-07 04:33:06.561622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-12-07 04:33:06.561675] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac4900 was disconnected and freed. reset controller. 
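In the completions above, spdk_nvme_print_completion prints the status as (SCT/SC): "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is what in-flight reads and writes receive once the target tears down the queue pair after the listener is removed; the qpair is then freed and the host starts another reset. To gauge the size of such a burst from a saved copy of this console output (the file name here is only an example), something like the following works:

    # Count completions aborted by SQ deletion, then break the printed
    # commands down by opcode (READ/WRITE) as logged by nvme_qpair.c.
    grep -c 'ABORTED - SQ DELETION' console.log
    grep -o 'print_command: \*NOTICE\*: [A-Z]*' console.log | sort | uniq -c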
00:18:03.346 [2024-12-07 04:33:06.561919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.346 [2024-12-07 04:33:06.561990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:18:03.346 [2024-12-07 04:33:06.562091] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.346 [2024-12-07 04:33:06.562141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.346 [2024-12-07 04:33:06.562182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.346 [2024-12-07 04:33:06.562198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e010 with addr=10.0.0.2, port=4420 00:18:03.346 [2024-12-07 04:33:06.562208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e010 is same with the state(5) to be set 00:18:03.346 [2024-12-07 04:33:06.562227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:18:03.346 [2024-12-07 04:33:06.562245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:03.346 [2024-12-07 04:33:06.562254] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:03.346 [2024-12-07 04:33:06.562264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.346 [2024-12-07 04:33:06.562284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:03.346 [2024-12-07 04:33:06.562295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.346 04:33:06 -- host/timeout.sh@101 -- # sleep 3 00:18:04.723 [2024-12-07 04:33:07.562423] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.723 [2024-12-07 04:33:07.562552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.723 [2024-12-07 04:33:07.562595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.723 [2024-12-07 04:33:07.562611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e010 with addr=10.0.0.2, port=4420 00:18:04.723 [2024-12-07 04:33:07.562640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e010 is same with the state(5) to be set 00:18:04.723 [2024-12-07 04:33:07.562680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:18:04.723 [2024-12-07 04:33:07.562715] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:04.723 [2024-12-07 04:33:07.562725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:04.723 [2024-12-07 04:33:07.562734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:04.723 [2024-12-07 04:33:07.562759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:04.723 [2024-12-07 04:33:07.562770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.677 [2024-12-07 04:33:08.562914] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.677 [2024-12-07 04:33:08.563044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.677 [2024-12-07 04:33:08.563086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.677 [2024-12-07 04:33:08.563101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e010 with addr=10.0.0.2, port=4420 00:18:05.677 [2024-12-07 04:33:08.563114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e010 is same with the state(5) to be set 00:18:05.677 [2024-12-07 04:33:08.563140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:18:05.677 [2024-12-07 04:33:08.563158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:05.677 [2024-12-07 04:33:08.563167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:05.677 [2024-12-07 04:33:08.563177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:05.677 [2024-12-07 04:33:08.563203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:05.677 [2024-12-07 04:33:08.563213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:06.613 [2024-12-07 04:33:09.564745] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.613 [2024-12-07 04:33:09.564868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.613 [2024-12-07 04:33:09.564908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:06.613 [2024-12-07 04:33:09.564924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a4e010 with addr=10.0.0.2, port=4420 00:18:06.613 [2024-12-07 04:33:09.564936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e010 is same with the state(5) to be set 00:18:06.613 [2024-12-07 04:33:09.565127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4e010 (9): Bad file descriptor 00:18:06.613 [2024-12-07 04:33:09.565308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:06.613 [2024-12-07 04:33:09.565336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:06.613 [2024-12-07 04:33:09.565347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:06.613 [2024-12-07 04:33:09.567828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:06.613 [2024-12-07 04:33:09.567875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:06.613 04:33:09 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.613 [2024-12-07 04:33:09.830596] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.613 04:33:09 -- host/timeout.sh@103 -- # wait 73863 00:18:07.549 [2024-12-07 04:33:10.596784] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:12.820 00:18:12.820 Latency(us) 00:18:12.820 [2024-12-07T04:33:16.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.820 [2024-12-07T04:33:16.060Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:12.820 Verification LBA range: start 0x0 length 0x4000 00:18:12.820 NVMe0n1 : 10.01 8459.75 33.05 5983.94 0.00 8846.51 459.87 3019898.88 00:18:12.820 [2024-12-07T04:33:16.060Z] =================================================================================================================== 00:18:12.820 [2024-12-07T04:33:16.060Z] Total : 8459.75 33.05 5983.94 0.00 8846.51 0.00 3019898.88 00:18:12.820 0 00:18:12.820 04:33:15 -- host/timeout.sh@105 -- # killprocess 73729 00:18:12.820 04:33:15 -- common/autotest_common.sh@936 -- # '[' -z 73729 ']' 00:18:12.820 04:33:15 -- common/autotest_common.sh@940 -- # kill -0 73729 00:18:12.820 04:33:15 -- common/autotest_common.sh@941 -- # uname 00:18:12.820 04:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.820 04:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73729 00:18:12.820 killing process with pid 73729 00:18:12.820 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.820 00:18:12.820 Latency(us) 00:18:12.820 [2024-12-07T04:33:16.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.820 [2024-12-07T04:33:16.060Z] =================================================================================================================== 00:18:12.820 [2024-12-07T04:33:16.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.820 04:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:12.820 04:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:12.820 04:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73729' 00:18:12.820 04:33:15 -- common/autotest_common.sh@955 -- # kill 73729 00:18:12.820 04:33:15 -- common/autotest_common.sh@960 -- # wait 73729 00:18:12.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:12.820 04:33:15 -- host/timeout.sh@110 -- # bdevperf_pid=73977 00:18:12.820 04:33:15 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:12.820 04:33:15 -- host/timeout.sh@112 -- # waitforlisten 73977 /var/tmp/bdevperf.sock 00:18:12.820 04:33:15 -- common/autotest_common.sh@829 -- # '[' -z 73977 ']' 00:18:12.820 04:33:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.820 04:33:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.820 04:33:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.820 04:33:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.820 04:33:15 -- common/autotest_common.sh@10 -- # set +x 00:18:12.821 [2024-12-07 04:33:15.713845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:12.821 [2024-12-07 04:33:15.713933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73977 ] 00:18:12.821 [2024-12-07 04:33:15.846774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.821 [2024-12-07 04:33:15.904284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.756 04:33:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.756 04:33:16 -- common/autotest_common.sh@862 -- # return 0 00:18:13.756 04:33:16 -- host/timeout.sh@116 -- # dtrace_pid=73993 00:18:13.756 04:33:16 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 73977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:13.756 04:33:16 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:14.013 04:33:17 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:14.270 NVMe0n1 00:18:14.270 04:33:17 -- host/timeout.sh@124 -- # rpc_pid=74033 00:18:14.270 04:33:17 -- host/timeout.sh@125 -- # sleep 1 00:18:14.270 04:33:17 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.270 Running I/O for 10 seconds... 
00:18:15.202 04:33:18 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.463 [2024-12-07 04:33:18.618184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c47f0 is same with the state(5) to be set 00:18:15.463 [... the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x12c47f0 repeats verbatim with successive timestamps through 04:33:18.619326; duplicate lines omitted ...] 00:18:15.464 [2024-12-07 04:33:18.619435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619488] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91032 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.464 [2024-12-07 04:33:18.619984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.464 [2024-12-07 04:33:18.619994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.465 [2024-12-07 04:33:18.620193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.620985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.620998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:15.465 [2024-12-07 04:33:18.621105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 
04:33:18.621326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.465 [2024-12-07 04:33:18.621545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.465 [2024-12-07 04:33:18.621555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621771] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.621988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.621998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31344 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.466 [2024-12-07 04:33:18.622308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2c0c0 is same with the state(5) to be set 00:18:15.466 [2024-12-07 04:33:18.622332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.466 [2024-12-07 04:33:18.622340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.466 [2024-12-07 04:33:18.622348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107808 len:8 PRP1 0x0 PRP2 0x0 00:18:15.466 [2024-12-07 04:33:18.622358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.466 [2024-12-07 04:33:18.622399] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc2c0c0 was disconnected and freed. reset controller. 
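The long run of NOTICE lines above is the expected outcome of the timeout test dropping the TCP connection: every read still queued on I/O qpair 1 is completed manually with ABORTED - SQ DELETION status (SCT 0x0 / SC 0x08) before the qpair 0xc2c0c0 is freed and a controller reset is scheduled. When scanning a saved copy of this console output, a one-liner like the sketch below tallies how many in-flight reads were aborted; the log filename is a placeholder, and the pattern is copied from the lines above.

```bash
# Count in-flight reads aborted by the submission-queue deletion above.
# "build.log" is a placeholder for wherever this console output was saved.
grep -c 'ABORTED - SQ DELETION' build.log
```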
00:18:15.466 [2024-12-07 04:33:18.622685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:15.466 [2024-12-07 04:33:18.622761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc9010 (9): Bad file descriptor 00:18:15.466 [2024-12-07 04:33:18.622871] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.466 [2024-12-07 04:33:18.622942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.466 [2024-12-07 04:33:18.622988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.466 [2024-12-07 04:33:18.623005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc9010 with addr=10.0.0.2, port=4420 00:18:15.466 [2024-12-07 04:33:18.623016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9010 is same with the state(5) to be set 00:18:15.466 [2024-12-07 04:33:18.623037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc9010 (9): Bad file descriptor 00:18:15.466 [2024-12-07 04:33:18.623053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.466 [2024-12-07 04:33:18.623063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:15.466 [2024-12-07 04:33:18.623074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:15.466 [2024-12-07 04:33:18.623094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:15.466 [2024-12-07 04:33:18.623105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:15.466 04:33:18 -- host/timeout.sh@128 -- # wait 74033 00:18:17.994 [2024-12-07 04:33:20.623633] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.994 [2024-12-07 04:33:20.623848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.994 [2024-12-07 04:33:20.623922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.994 [2024-12-07 04:33:20.623943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc9010 with addr=10.0.0.2, port=4420 00:18:17.994 [2024-12-07 04:33:20.623959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9010 is same with the state(5) to be set 00:18:17.994 [2024-12-07 04:33:20.624001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc9010 (9): Bad file descriptor 00:18:17.994 [2024-12-07 04:33:20.624024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.994 [2024-12-07 04:33:20.624045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:17.994 [2024-12-07 04:33:20.624058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:17.994 [2024-12-07 04:33:20.624090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
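Each retry above follows the same cycle: connect() to 10.0.0.2:4420 is refused (errno 111) because the test has taken the target away, controller re-initialization fails, and bdev_nvme schedules the next attempt roughly two seconds later. The check that timeout.sh performs further down counts these cycles in trace.txt; a minimal sketch of that kind of check, inferred from the grep and arithmetic-test lines that appear later in this log (not the verbatim host/timeout.sh code), would be:

```bash
# Minimal sketch of the reconnect-delay verification, assuming the trace path and
# message text shown in this log.
trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# One "reconnect delay bdev controller NVMe0" line is emitted per ~2 s retry cycle,
# so an ~8 s window with the target down should produce at least three of them.
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
if (( delays <= 2 )); then
    echo "expected more than 2 reconnect delays, got $delays" >&2
    exit 1
fi
```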
00:18:17.994 [2024-12-07 04:33:20.624104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:19.426 [2024-12-07 04:33:22.624261] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.426 [2024-12-07 04:33:22.624382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.426 [2024-12-07 04:33:22.624427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.426 [2024-12-07 04:33:22.624443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc9010 with addr=10.0.0.2, port=4420 00:18:19.426 [2024-12-07 04:33:22.624457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc9010 is same with the state(5) to be set 00:18:19.426 [2024-12-07 04:33:22.624483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc9010 (9): Bad file descriptor 00:18:19.426 [2024-12-07 04:33:22.624514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:19.426 [2024-12-07 04:33:22.624525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:19.426 [2024-12-07 04:33:22.624536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:19.426 [2024-12-07 04:33:22.624563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:19.426 [2024-12-07 04:33:22.624590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:21.961 [2024-12-07 04:33:24.624681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:21.961 [2024-12-07 04:33:24.624745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:21.961 [2024-12-07 04:33:24.624774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:21.961 [2024-12-07 04:33:24.624784] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:21.961 [2024-12-07 04:33:24.624812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:22.529 00:18:22.529 Latency(us) 00:18:22.529 [2024-12-07T04:33:25.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.529 [2024-12-07T04:33:25.769Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:22.529 NVMe0n1 : 8.15 2213.59 8.65 15.71 0.00 57323.22 7357.91 7046430.72 00:18:22.529 [2024-12-07T04:33:25.769Z] =================================================================================================================== 00:18:22.529 [2024-12-07T04:33:25.769Z] Total : 2213.59 8.65 15.71 0.00 57323.22 7357.91 7046430.72 00:18:22.529 0 00:18:22.529 04:33:25 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.529 Attaching 5 probes... 
00:18:22.529 1346.361502: reset bdev controller NVMe0 00:18:22.529 1346.483416: reconnect bdev controller NVMe0 00:18:22.529 3347.100597: reconnect delay bdev controller NVMe0 00:18:22.529 3347.136737: reconnect bdev controller NVMe0 00:18:22.529 5347.812547: reconnect delay bdev controller NVMe0 00:18:22.529 5347.848380: reconnect bdev controller NVMe0 00:18:22.529 7348.312982: reconnect delay bdev controller NVMe0 00:18:22.529 7348.354442: reconnect bdev controller NVMe0 00:18:22.529 04:33:25 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:22.529 04:33:25 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:22.529 04:33:25 -- host/timeout.sh@136 -- # kill 73993 00:18:22.529 04:33:25 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.529 04:33:25 -- host/timeout.sh@139 -- # killprocess 73977 00:18:22.529 04:33:25 -- common/autotest_common.sh@936 -- # '[' -z 73977 ']' 00:18:22.529 04:33:25 -- common/autotest_common.sh@940 -- # kill -0 73977 00:18:22.529 04:33:25 -- common/autotest_common.sh@941 -- # uname 00:18:22.529 04:33:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.529 04:33:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73977 00:18:22.529 killing process with pid 73977 00:18:22.529 Received shutdown signal, test time was about 8.219990 seconds 00:18:22.529 00:18:22.529 Latency(us) 00:18:22.529 [2024-12-07T04:33:25.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.529 [2024-12-07T04:33:25.769Z] =================================================================================================================== 00:18:22.529 [2024-12-07T04:33:25.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.530 04:33:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:22.530 04:33:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:22.530 04:33:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73977' 00:18:22.530 04:33:25 -- common/autotest_common.sh@955 -- # kill 73977 00:18:22.530 04:33:25 -- common/autotest_common.sh@960 -- # wait 73977 00:18:22.788 04:33:25 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.047 04:33:26 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:23.047 04:33:26 -- host/timeout.sh@145 -- # nvmftestfini 00:18:23.047 04:33:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:23.047 04:33:26 -- nvmf/common.sh@116 -- # sync 00:18:23.047 04:33:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:23.047 04:33:26 -- nvmf/common.sh@119 -- # set +e 00:18:23.047 04:33:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:23.047 04:33:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:23.047 rmmod nvme_tcp 00:18:23.047 rmmod nvme_fabrics 00:18:23.047 rmmod nvme_keyring 00:18:23.047 04:33:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:23.047 04:33:26 -- nvmf/common.sh@123 -- # set -e 00:18:23.047 04:33:26 -- nvmf/common.sh@124 -- # return 0 00:18:23.047 04:33:26 -- nvmf/common.sh@477 -- # '[' -n 73538 ']' 00:18:23.047 04:33:26 -- nvmf/common.sh@478 -- # killprocess 73538 00:18:23.047 04:33:26 -- common/autotest_common.sh@936 -- # '[' -z 73538 ']' 00:18:23.047 04:33:26 -- common/autotest_common.sh@940 -- # kill -0 73538 00:18:23.047 04:33:26 -- common/autotest_common.sh@941 -- # uname 00:18:23.047 04:33:26 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:18:23.047 04:33:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73538 00:18:23.306 killing process with pid 73538 00:18:23.306 04:33:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:23.306 04:33:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:23.306 04:33:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73538' 00:18:23.306 04:33:26 -- common/autotest_common.sh@955 -- # kill 73538 00:18:23.306 04:33:26 -- common/autotest_common.sh@960 -- # wait 73538 00:18:23.306 04:33:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:23.306 04:33:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:23.306 04:33:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:23.306 04:33:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.306 04:33:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:23.306 04:33:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.306 04:33:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.306 04:33:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.306 04:33:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:23.306 ************************************ 00:18:23.306 END TEST nvmf_timeout 00:18:23.306 ************************************ 00:18:23.306 00:18:23.306 real 0m47.218s 00:18:23.306 user 2m19.135s 00:18:23.306 sys 0m5.467s 00:18:23.306 04:33:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:23.306 04:33:26 -- common/autotest_common.sh@10 -- # set +x 00:18:23.565 04:33:26 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:23.565 04:33:26 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:23.565 04:33:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.565 04:33:26 -- common/autotest_common.sh@10 -- # set +x 00:18:23.565 04:33:26 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:23.565 00:18:23.565 real 10m34.393s 00:18:23.565 user 29m37.995s 00:18:23.565 sys 3m20.722s 00:18:23.565 ************************************ 00:18:23.565 END TEST nvmf_tcp 00:18:23.565 ************************************ 00:18:23.565 04:33:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:23.565 04:33:26 -- common/autotest_common.sh@10 -- # set +x 00:18:23.565 04:33:26 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:23.565 04:33:26 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:23.565 04:33:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:23.565 04:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:23.565 04:33:26 -- common/autotest_common.sh@10 -- # set +x 00:18:23.565 ************************************ 00:18:23.565 START TEST nvmf_dif 00:18:23.565 ************************************ 00:18:23.565 04:33:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:23.565 * Looking for test storage... 
00:18:23.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:23.565 04:33:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:23.565 04:33:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:23.565 04:33:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:23.824 04:33:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:23.824 04:33:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:23.824 04:33:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:23.824 04:33:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:23.825 04:33:26 -- scripts/common.sh@335 -- # IFS=.-: 00:18:23.825 04:33:26 -- scripts/common.sh@335 -- # read -ra ver1 00:18:23.825 04:33:26 -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.825 04:33:26 -- scripts/common.sh@336 -- # read -ra ver2 00:18:23.825 04:33:26 -- scripts/common.sh@337 -- # local 'op=<' 00:18:23.825 04:33:26 -- scripts/common.sh@339 -- # ver1_l=2 00:18:23.825 04:33:26 -- scripts/common.sh@340 -- # ver2_l=1 00:18:23.825 04:33:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:23.825 04:33:26 -- scripts/common.sh@343 -- # case "$op" in 00:18:23.825 04:33:26 -- scripts/common.sh@344 -- # : 1 00:18:23.825 04:33:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:23.825 04:33:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.825 04:33:26 -- scripts/common.sh@364 -- # decimal 1 00:18:23.825 04:33:26 -- scripts/common.sh@352 -- # local d=1 00:18:23.825 04:33:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.825 04:33:26 -- scripts/common.sh@354 -- # echo 1 00:18:23.825 04:33:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:23.825 04:33:26 -- scripts/common.sh@365 -- # decimal 2 00:18:23.825 04:33:26 -- scripts/common.sh@352 -- # local d=2 00:18:23.825 04:33:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.825 04:33:26 -- scripts/common.sh@354 -- # echo 2 00:18:23.825 04:33:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:23.825 04:33:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:23.825 04:33:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:23.825 04:33:26 -- scripts/common.sh@367 -- # return 0 00:18:23.825 04:33:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.825 04:33:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:23.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.825 --rc genhtml_branch_coverage=1 00:18:23.825 --rc genhtml_function_coverage=1 00:18:23.825 --rc genhtml_legend=1 00:18:23.825 --rc geninfo_all_blocks=1 00:18:23.825 --rc geninfo_unexecuted_blocks=1 00:18:23.825 00:18:23.825 ' 00:18:23.825 04:33:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:23.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.825 --rc genhtml_branch_coverage=1 00:18:23.825 --rc genhtml_function_coverage=1 00:18:23.825 --rc genhtml_legend=1 00:18:23.825 --rc geninfo_all_blocks=1 00:18:23.825 --rc geninfo_unexecuted_blocks=1 00:18:23.825 00:18:23.825 ' 00:18:23.825 04:33:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:23.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.825 --rc genhtml_branch_coverage=1 00:18:23.825 --rc genhtml_function_coverage=1 00:18:23.825 --rc genhtml_legend=1 00:18:23.825 --rc geninfo_all_blocks=1 00:18:23.825 --rc geninfo_unexecuted_blocks=1 00:18:23.825 00:18:23.825 ' 00:18:23.825 
04:33:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:23.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.825 --rc genhtml_branch_coverage=1 00:18:23.825 --rc genhtml_function_coverage=1 00:18:23.825 --rc genhtml_legend=1 00:18:23.825 --rc geninfo_all_blocks=1 00:18:23.825 --rc geninfo_unexecuted_blocks=1 00:18:23.825 00:18:23.825 ' 00:18:23.825 04:33:26 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:23.825 04:33:26 -- nvmf/common.sh@7 -- # uname -s 00:18:23.825 04:33:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.825 04:33:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.825 04:33:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.825 04:33:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.825 04:33:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.825 04:33:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.825 04:33:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.825 04:33:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.825 04:33:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.825 04:33:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.825 04:33:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:18:23.825 04:33:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:18:23.825 04:33:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.825 04:33:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.825 04:33:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:23.825 04:33:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:23.825 04:33:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.825 04:33:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.825 04:33:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.825 04:33:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.825 04:33:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.825 04:33:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.825 04:33:26 -- paths/export.sh@5 -- # export PATH 00:18:23.825 04:33:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.825 04:33:26 -- nvmf/common.sh@46 -- # : 0 00:18:23.825 04:33:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:23.825 04:33:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:23.825 04:33:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:23.825 04:33:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.825 04:33:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.825 04:33:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:23.825 04:33:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:23.825 04:33:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:23.825 04:33:26 -- target/dif.sh@15 -- # NULL_META=16 00:18:23.825 04:33:26 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:23.825 04:33:26 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:23.825 04:33:26 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:23.825 04:33:26 -- target/dif.sh@135 -- # nvmftestinit 00:18:23.825 04:33:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:23.825 04:33:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.825 04:33:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:23.825 04:33:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:23.825 04:33:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:23.825 04:33:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.825 04:33:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:23.825 04:33:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.825 04:33:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:23.825 04:33:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:23.825 04:33:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:23.825 04:33:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:23.825 04:33:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:23.825 04:33:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:23.825 04:33:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.825 04:33:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.825 04:33:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:23.825 04:33:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:23.825 04:33:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:23.825 04:33:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:23.825 04:33:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:23.825 04:33:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.825 04:33:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:23.825 04:33:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:23.825 04:33:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:23.825 04:33:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:23.825 04:33:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:23.825 04:33:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:23.825 Cannot find device "nvmf_tgt_br" 
00:18:23.825 04:33:26 -- nvmf/common.sh@154 -- # true 00:18:23.825 04:33:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:23.825 Cannot find device "nvmf_tgt_br2" 00:18:23.825 04:33:26 -- nvmf/common.sh@155 -- # true 00:18:23.825 04:33:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:23.826 04:33:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:23.826 Cannot find device "nvmf_tgt_br" 00:18:23.826 04:33:26 -- nvmf/common.sh@157 -- # true 00:18:23.826 04:33:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:23.826 Cannot find device "nvmf_tgt_br2" 00:18:23.826 04:33:26 -- nvmf/common.sh@158 -- # true 00:18:23.826 04:33:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:23.826 04:33:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:23.826 04:33:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:23.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.826 04:33:26 -- nvmf/common.sh@161 -- # true 00:18:23.826 04:33:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:23.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:23.826 04:33:27 -- nvmf/common.sh@162 -- # true 00:18:23.826 04:33:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:23.826 04:33:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:23.826 04:33:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:23.826 04:33:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:23.826 04:33:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:23.826 04:33:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:23.826 04:33:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:23.826 04:33:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.085 04:33:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.085 04:33:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:24.085 04:33:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:24.085 04:33:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:24.085 04:33:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:24.085 04:33:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.085 04:33:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.085 04:33:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.085 04:33:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:24.085 04:33:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:24.085 04:33:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.085 04:33:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.085 04:33:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.085 04:33:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.085 04:33:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.085 04:33:27 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:24.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:24.085 00:18:24.085 --- 10.0.0.2 ping statistics --- 00:18:24.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.085 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:24.085 04:33:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:24.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:24.085 00:18:24.085 --- 10.0.0.3 ping statistics --- 00:18:24.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.085 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:24.085 04:33:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:24.085 00:18:24.085 --- 10.0.0.1 ping statistics --- 00:18:24.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.085 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:24.086 04:33:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.086 04:33:27 -- nvmf/common.sh@421 -- # return 0 00:18:24.086 04:33:27 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:24.086 04:33:27 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:24.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:24.346 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:24.346 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:24.346 04:33:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.346 04:33:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:24.346 04:33:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:24.346 04:33:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.346 04:33:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:24.346 04:33:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:24.346 04:33:27 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:24.346 04:33:27 -- target/dif.sh@137 -- # nvmfappstart 00:18:24.346 04:33:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:24.346 04:33:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.346 04:33:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.346 04:33:27 -- nvmf/common.sh@469 -- # nvmfpid=74475 00:18:24.346 04:33:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:24.346 04:33:27 -- nvmf/common.sh@470 -- # waitforlisten 74475 00:18:24.346 04:33:27 -- common/autotest_common.sh@829 -- # '[' -z 74475 ']' 00:18:24.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.346 04:33:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.346 04:33:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.346 04:33:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
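Before the dif target comes up, nvmf_veth_init has built the usual virtual topology: the initiator stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, and everything is joined by the nvmf_br bridge, which is what the three pings above verify. A condensed sketch of that setup follows, with interface names and addresses copied from the ip commands above; ordering is simplified and the error handling and iptables rules are omitted.

```bash
# Condensed sketch of nvmf_veth_init, reconstructed from the ip commands in this log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
```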
00:18:24.346 04:33:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.346 04:33:27 -- common/autotest_common.sh@10 -- # set +x 00:18:24.605 [2024-12-07 04:33:27.634385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:24.605 [2024-12-07 04:33:27.635000] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.605 [2024-12-07 04:33:27.775990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.864 [2024-12-07 04:33:27.845055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:24.864 [2024-12-07 04:33:27.845238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.864 [2024-12-07 04:33:27.845256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.864 [2024-12-07 04:33:27.845267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.864 [2024-12-07 04:33:27.845296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.430 04:33:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.430 04:33:28 -- common/autotest_common.sh@862 -- # return 0 00:18:25.430 04:33:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:25.430 04:33:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.430 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 04:33:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.688 04:33:28 -- target/dif.sh@139 -- # create_transport 00:18:25.688 04:33:28 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:25.688 04:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.688 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 [2024-12-07 04:33:28.687454] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.688 04:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.688 04:33:28 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:25.688 04:33:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:25.688 04:33:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:25.688 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 ************************************ 00:18:25.688 START TEST fio_dif_1_default 00:18:25.688 ************************************ 00:18:25.688 04:33:28 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:18:25.688 04:33:28 -- target/dif.sh@86 -- # create_subsystems 0 00:18:25.688 04:33:28 -- target/dif.sh@28 -- # local sub 00:18:25.688 04:33:28 -- target/dif.sh@30 -- # for sub in "$@" 00:18:25.688 04:33:28 -- target/dif.sh@31 -- # create_subsystem 0 00:18:25.688 04:33:28 -- target/dif.sh@18 -- # local sub_id=0 00:18:25.688 04:33:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:25.688 04:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.688 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 bdev_null0 00:18:25.688 04:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.688 04:33:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:25.688 04:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.688 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 04:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.688 04:33:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:25.688 04:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.688 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 04:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.688 04:33:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:25.688 04:33:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.688 04:33:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.688 [2024-12-07 04:33:28.731574] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.688 04:33:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.688 04:33:28 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:25.688 04:33:28 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:25.688 04:33:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:25.688 04:33:28 -- nvmf/common.sh@520 -- # config=() 00:18:25.688 04:33:28 -- nvmf/common.sh@520 -- # local subsystem config 00:18:25.688 04:33:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:25.688 04:33:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:25.688 04:33:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:25.688 { 00:18:25.688 "params": { 00:18:25.688 "name": "Nvme$subsystem", 00:18:25.688 "trtype": "$TEST_TRANSPORT", 00:18:25.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:25.688 "adrfam": "ipv4", 00:18:25.688 "trsvcid": "$NVMF_PORT", 00:18:25.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:25.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:25.688 "hdgst": ${hdgst:-false}, 00:18:25.688 "ddgst": ${ddgst:-false} 00:18:25.688 }, 00:18:25.688 "method": "bdev_nvme_attach_controller" 00:18:25.688 } 00:18:25.688 EOF 00:18:25.688 )") 00:18:25.688 04:33:28 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:25.688 04:33:28 -- target/dif.sh@82 -- # gen_fio_conf 00:18:25.688 04:33:28 -- target/dif.sh@54 -- # local file 00:18:25.688 04:33:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:25.688 04:33:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.688 04:33:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:25.688 04:33:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.688 04:33:28 -- common/autotest_common.sh@1330 -- # shift 00:18:25.688 04:33:28 -- nvmf/common.sh@542 -- # cat 00:18:25.688 04:33:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:25.688 04:33:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.688 04:33:28 -- target/dif.sh@56 -- # cat 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.688 04:33:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:25.688 04:33:28 -- target/dif.sh@72 -- # (( file <= files )) 00:18:25.688 04:33:28 -- nvmf/common.sh@544 -- # jq . 00:18:25.688 04:33:28 -- nvmf/common.sh@545 -- # IFS=, 00:18:25.688 04:33:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:25.688 "params": { 00:18:25.688 "name": "Nvme0", 00:18:25.688 "trtype": "tcp", 00:18:25.688 "traddr": "10.0.0.2", 00:18:25.688 "adrfam": "ipv4", 00:18:25.688 "trsvcid": "4420", 00:18:25.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:25.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:25.688 "hdgst": false, 00:18:25.688 "ddgst": false 00:18:25.688 }, 00:18:25.688 "method": "bdev_nvme_attach_controller" 00:18:25.688 }' 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:25.688 04:33:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:25.688 04:33:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:25.688 04:33:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:25.688 04:33:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:25.688 04:33:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.688 04:33:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:25.946 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:25.946 fio-3.35 00:18:25.946 Starting 1 thread 00:18:26.205 [2024-12-07 04:33:29.300551] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
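The rpc.c *ERROR* lines around this point come from the fio spdk_bdev plugin trying to start its own RPC listener while /var/tmp/spdk.sock is presumably still owned by the target started earlier; the run continues and completes regardless. The bdev configuration fio actually consumes is the JSON piped in on /dev/fd/62 above. Reformatted for readability in the sketch below, with values copied verbatim from this log; the output filename is a placeholder, since the test passes the JSON over a file descriptor rather than a file.

```bash
# NVMe controller portion of the JSON handed to fio's spdk_bdev ioengine above,
# written to a placeholder file for readability.
cat <<'JSON' > /tmp/nvme0_attach.json
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
```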
00:18:26.205 [2024-12-07 04:33:29.300627] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:38.424 00:18:38.424 filename0: (groupid=0, jobs=1): err= 0: pid=74547: Sat Dec 7 04:33:39 2024 00:18:38.424 read: IOPS=9561, BW=37.4MiB/s (39.2MB/s)(374MiB/10001msec) 00:18:38.424 slat (nsec): min=5833, max=70987, avg=8091.69, stdev=3507.28 00:18:38.424 clat (usec): min=319, max=4795, avg=394.61, stdev=52.86 00:18:38.424 lat (usec): min=325, max=4822, avg=402.71, stdev=53.62 00:18:38.424 clat percentiles (usec): 00:18:38.424 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:18:38.424 | 30.00th=[ 367], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 396], 00:18:38.424 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 453], 95.00th=[ 478], 00:18:38.424 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 553], 99.95th=[ 570], 00:18:38.424 | 99.99th=[ 1991] 00:18:38.424 bw ( KiB/s): min=36480, max=40608, per=99.83%, avg=38181.95, stdev=918.29, samples=19 00:18:38.424 iops : min= 9120, max=10152, avg=9545.47, stdev=229.58, samples=19 00:18:38.424 lat (usec) : 500=98.05%, 750=1.93% 00:18:38.424 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:18:38.424 cpu : usr=85.37%, sys=12.75%, ctx=12, majf=0, minf=9 00:18:38.424 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:38.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.424 issued rwts: total=95628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.424 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:38.424 00:18:38.424 Run status group 0 (all jobs): 00:18:38.424 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=374MiB (392MB), run=10001-10001msec 00:18:38.424 04:33:39 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:38.424 04:33:39 -- target/dif.sh@43 -- # local sub 00:18:38.424 04:33:39 -- target/dif.sh@45 -- # for sub in "$@" 00:18:38.424 04:33:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:38.424 04:33:39 -- target/dif.sh@36 -- # local sub_id=0 00:18:38.424 04:33:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 ************************************ 00:18:38.424 END TEST fio_dif_1_default 00:18:38.424 ************************************ 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 00:18:38.424 real 0m10.906s 00:18:38.424 user 0m9.121s 00:18:38.424 sys 0m1.513s 00:18:38.424 04:33:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:38.424 04:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:38.424 04:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 ************************************ 00:18:38.424 START TEST 
fio_dif_1_multi_subsystems 00:18:38.424 ************************************ 00:18:38.424 04:33:39 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:18:38.424 04:33:39 -- target/dif.sh@92 -- # local files=1 00:18:38.424 04:33:39 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:38.424 04:33:39 -- target/dif.sh@28 -- # local sub 00:18:38.424 04:33:39 -- target/dif.sh@30 -- # for sub in "$@" 00:18:38.424 04:33:39 -- target/dif.sh@31 -- # create_subsystem 0 00:18:38.424 04:33:39 -- target/dif.sh@18 -- # local sub_id=0 00:18:38.424 04:33:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 bdev_null0 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 [2024-12-07 04:33:39.699860] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@30 -- # for sub in "$@" 00:18:38.424 04:33:39 -- target/dif.sh@31 -- # create_subsystem 1 00:18:38.424 04:33:39 -- target/dif.sh@18 -- # local sub_id=1 00:18:38.424 04:33:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 bdev_null1 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.424 04:33:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.424 04:33:39 -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.424 04:33:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.424 04:33:39 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:38.424 04:33:39 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:38.424 04:33:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:38.424 04:33:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.424 04:33:39 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.424 04:33:39 -- nvmf/common.sh@520 -- # config=() 00:18:38.424 04:33:39 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:38.424 04:33:39 -- nvmf/common.sh@520 -- # local subsystem config 00:18:38.424 04:33:39 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.424 04:33:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:38.424 04:33:39 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:38.424 04:33:39 -- target/dif.sh@82 -- # gen_fio_conf 00:18:38.424 04:33:39 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.424 04:33:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:38.424 { 00:18:38.424 "params": { 00:18:38.424 "name": "Nvme$subsystem", 00:18:38.424 "trtype": "$TEST_TRANSPORT", 00:18:38.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.424 "adrfam": "ipv4", 00:18:38.424 "trsvcid": "$NVMF_PORT", 00:18:38.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.424 "hdgst": ${hdgst:-false}, 00:18:38.424 "ddgst": ${ddgst:-false} 00:18:38.424 }, 00:18:38.424 "method": "bdev_nvme_attach_controller" 00:18:38.424 } 00:18:38.424 EOF 00:18:38.424 )") 00:18:38.424 04:33:39 -- common/autotest_common.sh@1330 -- # shift 00:18:38.424 04:33:39 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:38.424 04:33:39 -- target/dif.sh@54 -- # local file 00:18:38.424 04:33:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.424 04:33:39 -- target/dif.sh@56 -- # cat 00:18:38.424 04:33:39 -- nvmf/common.sh@542 -- # cat 00:18:38.424 04:33:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.424 04:33:39 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:38.424 04:33:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:38.424 04:33:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:38.424 04:33:39 -- target/dif.sh@72 -- # (( file <= files )) 00:18:38.424 04:33:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:38.424 04:33:39 -- target/dif.sh@73 -- # cat 00:18:38.424 04:33:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:38.424 { 00:18:38.424 "params": { 00:18:38.424 "name": "Nvme$subsystem", 00:18:38.424 "trtype": "$TEST_TRANSPORT", 00:18:38.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.424 "adrfam": "ipv4", 00:18:38.424 "trsvcid": "$NVMF_PORT", 00:18:38.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.424 "hdgst": ${hdgst:-false}, 00:18:38.424 "ddgst": ${ddgst:-false} 00:18:38.424 }, 00:18:38.424 "method": "bdev_nvme_attach_controller" 00:18:38.424 } 00:18:38.424 EOF 00:18:38.424 )") 00:18:38.424 04:33:39 -- nvmf/common.sh@542 -- # cat 00:18:38.425 04:33:39 -- target/dif.sh@72 
-- # (( file++ )) 00:18:38.425 04:33:39 -- target/dif.sh@72 -- # (( file <= files )) 00:18:38.425 04:33:39 -- nvmf/common.sh@544 -- # jq . 00:18:38.425 04:33:39 -- nvmf/common.sh@545 -- # IFS=, 00:18:38.425 04:33:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:38.425 "params": { 00:18:38.425 "name": "Nvme0", 00:18:38.425 "trtype": "tcp", 00:18:38.425 "traddr": "10.0.0.2", 00:18:38.425 "adrfam": "ipv4", 00:18:38.425 "trsvcid": "4420", 00:18:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:38.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:38.425 "hdgst": false, 00:18:38.425 "ddgst": false 00:18:38.425 }, 00:18:38.425 "method": "bdev_nvme_attach_controller" 00:18:38.425 },{ 00:18:38.425 "params": { 00:18:38.425 "name": "Nvme1", 00:18:38.425 "trtype": "tcp", 00:18:38.425 "traddr": "10.0.0.2", 00:18:38.425 "adrfam": "ipv4", 00:18:38.425 "trsvcid": "4420", 00:18:38.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.425 "hdgst": false, 00:18:38.425 "ddgst": false 00:18:38.425 }, 00:18:38.425 "method": "bdev_nvme_attach_controller" 00:18:38.425 }' 00:18:38.425 04:33:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:38.425 04:33:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:38.425 04:33:39 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.425 04:33:39 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:38.425 04:33:39 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.425 04:33:39 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:38.425 04:33:39 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:38.425 04:33:39 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:38.425 04:33:39 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.425 04:33:39 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.425 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:38.425 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:38.425 fio-3.35 00:18:38.425 Starting 2 threads 00:18:38.425 [2024-12-07 04:33:40.359466] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
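[editor's sketch] The printf output above shows the per-controller entries that gen_nvmf_target_json emits for Nvme0 and Nvme1. A sketch of the complete file handed to fio via --spdk_json_conf follows, assuming the standard SPDK JSON config wrapper ("subsystems" -> "bdev" -> "config"); only the Nvme0 entry is shown (copied from the log), the Nvme1 entry is analogous.

  # Write an assumed-complete SPDK JSON config for the fio spdk_bdev plugin.
  cat > bdev.json <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  JSON
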
00:18:38.425 [2024-12-07 04:33:40.359786] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:48.399 00:18:48.399 filename0: (groupid=0, jobs=1): err= 0: pid=74707: Sat Dec 7 04:33:50 2024 00:18:48.399 read: IOPS=5133, BW=20.1MiB/s (21.0MB/s)(201MiB/10001msec) 00:18:48.399 slat (nsec): min=6314, max=78752, avg=13283.62, stdev=4893.61 00:18:48.399 clat (usec): min=562, max=1338, avg=743.88, stdev=60.56 00:18:48.399 lat (usec): min=575, max=1362, avg=757.16, stdev=61.27 00:18:48.399 clat percentiles (usec): 00:18:48.399 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:18:48.399 | 30.00th=[ 709], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:18:48.399 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 848], 00:18:48.399 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 963], 00:18:48.399 | 99.99th=[ 996] 00:18:48.399 bw ( KiB/s): min=19872, max=21760, per=50.10%, avg=20576.42, stdev=523.68, samples=19 00:18:48.399 iops : min= 4968, max= 5440, avg=5144.11, stdev=130.92, samples=19 00:18:48.399 lat (usec) : 750=58.04%, 1000=41.95% 00:18:48.399 lat (msec) : 2=0.01% 00:18:48.399 cpu : usr=89.95%, sys=8.64%, ctx=54, majf=0, minf=0 00:18:48.399 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.399 issued rwts: total=51344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.399 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:48.399 filename1: (groupid=0, jobs=1): err= 0: pid=74708: Sat Dec 7 04:33:50 2024 00:18:48.399 read: IOPS=5133, BW=20.1MiB/s (21.0MB/s)(201MiB/10001msec) 00:18:48.399 slat (nsec): min=6316, max=72463, avg=13739.59, stdev=5134.07 00:18:48.399 clat (usec): min=599, max=1426, avg=740.66, stdev=58.24 00:18:48.399 lat (usec): min=605, max=1455, avg=754.39, stdev=59.22 00:18:48.399 clat percentiles (usec): 00:18:48.399 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 693], 00:18:48.399 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:18:48.399 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 848], 00:18:48.399 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 955], 00:18:48.399 | 99.99th=[ 1004] 00:18:48.399 bw ( KiB/s): min=19872, max=21760, per=50.10%, avg=20576.42, stdev=523.68, samples=19 00:18:48.399 iops : min= 4968, max= 5440, avg=5144.11, stdev=130.92, samples=19 00:18:48.399 lat (usec) : 750=60.32%, 1000=39.67% 00:18:48.399 lat (msec) : 2=0.01% 00:18:48.399 cpu : usr=90.07%, sys=8.44%, ctx=27, majf=0, minf=0 00:18:48.399 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.399 issued rwts: total=51344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.399 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:48.399 00:18:48.399 Run status group 0 (all jobs): 00:18:48.399 READ: bw=40.1MiB/s (42.1MB/s), 20.1MiB/s-20.1MiB/s (21.0MB/s-21.0MB/s), io=401MiB (421MB), run=10001-10001msec 00:18:48.399 04:33:50 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:48.399 04:33:50 -- target/dif.sh@43 -- # local sub 00:18:48.399 04:33:50 -- target/dif.sh@45 -- # for sub in "$@" 00:18:48.399 04:33:50 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:18:48.399 04:33:50 -- target/dif.sh@36 -- # local sub_id=0 00:18:48.399 04:33:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 04:33:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 04:33:50 -- target/dif.sh@45 -- # for sub in "$@" 00:18:48.399 04:33:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:48.399 04:33:50 -- target/dif.sh@36 -- # local sub_id=1 00:18:48.399 04:33:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 04:33:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 00:18:48.399 real 0m11.032s 00:18:48.399 user 0m18.673s 00:18:48.399 sys 0m1.963s 00:18:48.399 04:33:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:48.399 ************************************ 00:18:48.399 END TEST fio_dif_1_multi_subsystems 00:18:48.399 ************************************ 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:48.399 04:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:48.399 04:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 ************************************ 00:18:48.399 START TEST fio_dif_rand_params 00:18:48.399 ************************************ 00:18:48.399 04:33:50 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:18:48.399 04:33:50 -- target/dif.sh@100 -- # local NULL_DIF 00:18:48.399 04:33:50 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:48.399 04:33:50 -- target/dif.sh@103 -- # NULL_DIF=3 00:18:48.399 04:33:50 -- target/dif.sh@103 -- # bs=128k 00:18:48.399 04:33:50 -- target/dif.sh@103 -- # numjobs=3 00:18:48.399 04:33:50 -- target/dif.sh@103 -- # iodepth=3 00:18:48.399 04:33:50 -- target/dif.sh@103 -- # runtime=5 00:18:48.399 04:33:50 -- target/dif.sh@105 -- # create_subsystems 0 00:18:48.399 04:33:50 -- target/dif.sh@28 -- # local sub 00:18:48.399 04:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:18:48.399 04:33:50 -- target/dif.sh@31 -- # create_subsystem 0 00:18:48.399 04:33:50 -- target/dif.sh@18 -- # local sub_id=0 00:18:48.399 04:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 bdev_null0 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 
04:33:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 04:33:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.399 04:33:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:48.399 04:33:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.399 04:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:48.399 [2024-12-07 04:33:50.778566] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.400 04:33:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.400 04:33:50 -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:48.400 04:33:50 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:48.400 04:33:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:48.400 04:33:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:48.400 04:33:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:48.400 04:33:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:48.400 04:33:50 -- target/dif.sh@82 -- # gen_fio_conf 00:18:48.400 04:33:50 -- nvmf/common.sh@520 -- # config=() 00:18:48.400 04:33:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.400 04:33:50 -- target/dif.sh@54 -- # local file 00:18:48.400 04:33:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:48.400 04:33:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:48.400 04:33:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.400 04:33:50 -- target/dif.sh@56 -- # cat 00:18:48.400 04:33:50 -- common/autotest_common.sh@1330 -- # shift 00:18:48.400 04:33:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:48.400 04:33:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:48.400 04:33:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:48.400 { 00:18:48.400 "params": { 00:18:48.400 "name": "Nvme$subsystem", 00:18:48.400 "trtype": "$TEST_TRANSPORT", 00:18:48.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.400 "adrfam": "ipv4", 00:18:48.400 "trsvcid": "$NVMF_PORT", 00:18:48.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.400 "hdgst": ${hdgst:-false}, 00:18:48.400 "ddgst": ${ddgst:-false} 00:18:48.400 }, 00:18:48.400 "method": "bdev_nvme_attach_controller" 00:18:48.400 } 00:18:48.400 EOF 00:18:48.400 )") 00:18:48.400 04:33:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.400 04:33:50 -- nvmf/common.sh@542 -- # cat 00:18:48.400 04:33:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:48.400 04:33:50 -- target/dif.sh@72 -- # (( file <= 
files )) 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:48.400 04:33:50 -- nvmf/common.sh@544 -- # jq . 00:18:48.400 04:33:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:48.400 04:33:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:48.400 "params": { 00:18:48.400 "name": "Nvme0", 00:18:48.400 "trtype": "tcp", 00:18:48.400 "traddr": "10.0.0.2", 00:18:48.400 "adrfam": "ipv4", 00:18:48.400 "trsvcid": "4420", 00:18:48.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:48.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:48.400 "hdgst": false, 00:18:48.400 "ddgst": false 00:18:48.400 }, 00:18:48.400 "method": "bdev_nvme_attach_controller" 00:18:48.400 }' 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:48.400 04:33:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:48.400 04:33:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:48.400 04:33:50 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:48.400 04:33:50 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:48.400 04:33:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:48.400 04:33:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:48.400 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:48.400 ... 00:18:48.400 fio-3.35 00:18:48.400 Starting 3 threads 00:18:48.400 [2024-12-07 04:33:51.334214] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
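[editor's sketch] The create_subsystems trace above builds a DIF type 3 null bdev, wraps it in an NVMe-oF subsystem and adds a TCP listener via rpc_cmd. The same setup can be issued directly with scripts/rpc.py against a running nvmf_tgt; the sketch below mirrors the commands in the log and assumes the default /var/tmp/spdk.sock RPC socket and that the tcp transport was already created earlier in the run.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # NVMe-oF subsystem that any host may connect to
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  # Attach the null bdev to the subsystem as a namespace
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  # Listen for NVMe/TCP initiators on 10.0.0.2:4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
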
00:18:48.400 [2024-12-07 04:33:51.334528] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:53.665 00:18:53.665 filename0: (groupid=0, jobs=1): err= 0: pid=74864: Sat Dec 7 04:33:56 2024 00:18:53.665 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5007msec) 00:18:53.665 slat (nsec): min=6922, max=65893, avg=16455.58, stdev=5824.76 00:18:53.665 clat (usec): min=10237, max=14529, avg=11139.05, stdev=473.78 00:18:53.665 lat (usec): min=10255, max=14553, avg=11155.51, stdev=474.41 00:18:53.665 clat percentiles (usec): 00:18:53.665 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:18:53.665 | 30.00th=[10814], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:18:53.665 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:18:53.665 | 99.00th=[12125], 99.50th=[12256], 99.90th=[14484], 99.95th=[14484], 00:18:53.665 | 99.99th=[14484] 00:18:53.665 bw ( KiB/s): min=33792, max=35328, per=33.32%, avg=34329.60, stdev=632.27, samples=10 00:18:53.665 iops : min= 264, max= 276, avg=268.20, stdev= 4.94, samples=10 00:18:53.665 lat (msec) : 20=100.00% 00:18:53.665 cpu : usr=91.51%, sys=7.89%, ctx=61, majf=0, minf=9 00:18:53.665 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.665 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.665 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:53.665 filename0: (groupid=0, jobs=1): err= 0: pid=74865: Sat Dec 7 04:33:56 2024 00:18:53.665 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(168MiB/5009msec) 00:18:53.665 slat (nsec): min=6543, max=56245, avg=15642.23, stdev=6334.51 00:18:53.665 clat (usec): min=10261, max=16210, avg=11146.13, stdev=505.89 00:18:53.665 lat (usec): min=10270, max=16266, avg=11161.77, stdev=506.54 00:18:53.665 clat percentiles (usec): 00:18:53.665 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:18:53.665 | 30.00th=[10814], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:18:53.665 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:18:53.665 | 99.00th=[12125], 99.50th=[12387], 99.90th=[16188], 99.95th=[16188], 00:18:53.665 | 99.99th=[16188] 00:18:53.665 bw ( KiB/s): min=33792, max=34560, per=33.32%, avg=34329.60, stdev=370.98, samples=10 00:18:53.665 iops : min= 264, max= 270, avg=268.20, stdev= 2.90, samples=10 00:18:53.665 lat (msec) : 20=100.00% 00:18:53.665 cpu : usr=91.59%, sys=7.79%, ctx=28, majf=0, minf=0 00:18:53.665 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.666 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:53.666 filename0: (groupid=0, jobs=1): err= 0: pid=74866: Sat Dec 7 04:33:56 2024 00:18:53.666 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5006msec) 00:18:53.666 slat (nsec): min=6959, max=51903, avg=16163.83, stdev=5730.62 00:18:53.666 clat (usec): min=10235, max=13010, avg=11137.15, stdev=453.60 00:18:53.666 lat (usec): min=10247, max=13034, avg=11153.31, stdev=454.12 00:18:53.666 clat percentiles (usec): 00:18:53.666 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 
20.00th=[10683], 00:18:53.666 | 30.00th=[10814], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:18:53.666 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:18:53.666 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13042], 99.95th=[13042], 00:18:53.666 | 99.99th=[13042] 00:18:53.666 bw ( KiB/s): min=33792, max=35328, per=33.32%, avg=34329.60, stdev=632.27, samples=10 00:18:53.666 iops : min= 264, max= 276, avg=268.20, stdev= 4.94, samples=10 00:18:53.666 lat (msec) : 20=100.00% 00:18:53.666 cpu : usr=91.73%, sys=7.73%, ctx=57, majf=0, minf=9 00:18:53.666 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.666 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:53.666 00:18:53.666 Run status group 0 (all jobs): 00:18:53.666 READ: bw=101MiB/s (106MB/s), 33.5MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=504MiB (528MB), run=5006-5009msec 00:18:53.666 04:33:56 -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:53.666 04:33:56 -- target/dif.sh@43 -- # local sub 00:18:53.666 04:33:56 -- target/dif.sh@45 -- # for sub in "$@" 00:18:53.666 04:33:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:53.666 04:33:56 -- target/dif.sh@36 -- # local sub_id=0 00:18:53.666 04:33:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@109 -- # NULL_DIF=2 00:18:53.666 04:33:56 -- target/dif.sh@109 -- # bs=4k 00:18:53.666 04:33:56 -- target/dif.sh@109 -- # numjobs=8 00:18:53.666 04:33:56 -- target/dif.sh@109 -- # iodepth=16 00:18:53.666 04:33:56 -- target/dif.sh@109 -- # runtime= 00:18:53.666 04:33:56 -- target/dif.sh@109 -- # files=2 00:18:53.666 04:33:56 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:53.666 04:33:56 -- target/dif.sh@28 -- # local sub 00:18:53.666 04:33:56 -- target/dif.sh@30 -- # for sub in "$@" 00:18:53.666 04:33:56 -- target/dif.sh@31 -- # create_subsystem 0 00:18:53.666 04:33:56 -- target/dif.sh@18 -- # local sub_id=0 00:18:53.666 04:33:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 bdev_null0 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 [2024-12-07 04:33:56.671426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@30 -- # for sub in "$@" 00:18:53.666 04:33:56 -- target/dif.sh@31 -- # create_subsystem 1 00:18:53.666 04:33:56 -- target/dif.sh@18 -- # local sub_id=1 00:18:53.666 04:33:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 bdev_null1 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@30 -- # for sub in "$@" 00:18:53.666 04:33:56 -- target/dif.sh@31 -- # create_subsystem 2 00:18:53.666 04:33:56 -- target/dif.sh@18 -- # local sub_id=2 00:18:53.666 04:33:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 bdev_null2 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:53.666 04:33:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.666 04:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 04:33:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.666 04:33:56 -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:53.666 04:33:56 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:53.666 04:33:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:53.666 04:33:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:53.666 04:33:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:53.666 04:33:56 -- nvmf/common.sh@520 -- # config=() 00:18:53.666 04:33:56 -- nvmf/common.sh@520 -- # local subsystem config 00:18:53.666 04:33:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:53.666 04:33:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:53.666 04:33:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.666 04:33:56 -- target/dif.sh@82 -- # gen_fio_conf 00:18:53.666 04:33:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:53.666 { 00:18:53.666 "params": { 00:18:53.666 "name": "Nvme$subsystem", 00:18:53.666 "trtype": "$TEST_TRANSPORT", 00:18:53.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.666 "adrfam": "ipv4", 00:18:53.666 "trsvcid": "$NVMF_PORT", 00:18:53.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.666 "hdgst": ${hdgst:-false}, 00:18:53.666 "ddgst": ${ddgst:-false} 00:18:53.666 }, 00:18:53.666 "method": "bdev_nvme_attach_controller" 00:18:53.666 } 00:18:53.666 EOF 00:18:53.666 )") 00:18:53.666 04:33:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:53.666 04:33:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.666 04:33:56 -- target/dif.sh@54 -- # local file 00:18:53.666 04:33:56 -- common/autotest_common.sh@1330 -- # shift 00:18:53.666 04:33:56 -- target/dif.sh@56 -- # cat 00:18:53.667 04:33:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:53.667 04:33:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.667 04:33:56 -- nvmf/common.sh@542 -- # cat 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:53.667 04:33:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:53.667 04:33:56 -- target/dif.sh@72 -- # (( file <= files )) 00:18:53.667 04:33:56 -- target/dif.sh@73 -- # cat 00:18:53.667 04:33:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:53.667 04:33:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:53.667 { 00:18:53.667 "params": { 00:18:53.667 "name": "Nvme$subsystem", 00:18:53.667 "trtype": "$TEST_TRANSPORT", 00:18:53.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.667 "adrfam": "ipv4", 00:18:53.667 "trsvcid": "$NVMF_PORT", 00:18:53.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.667 "hdgst": ${hdgst:-false}, 
00:18:53.667 "ddgst": ${ddgst:-false} 00:18:53.667 }, 00:18:53.667 "method": "bdev_nvme_attach_controller" 00:18:53.667 } 00:18:53.667 EOF 00:18:53.667 )") 00:18:53.667 04:33:56 -- target/dif.sh@72 -- # (( file++ )) 00:18:53.667 04:33:56 -- target/dif.sh@72 -- # (( file <= files )) 00:18:53.667 04:33:56 -- target/dif.sh@73 -- # cat 00:18:53.667 04:33:56 -- nvmf/common.sh@542 -- # cat 00:18:53.667 04:33:56 -- target/dif.sh@72 -- # (( file++ )) 00:18:53.667 04:33:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:53.667 04:33:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:53.667 { 00:18:53.667 "params": { 00:18:53.667 "name": "Nvme$subsystem", 00:18:53.667 "trtype": "$TEST_TRANSPORT", 00:18:53.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.667 "adrfam": "ipv4", 00:18:53.667 "trsvcid": "$NVMF_PORT", 00:18:53.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.667 "hdgst": ${hdgst:-false}, 00:18:53.667 "ddgst": ${ddgst:-false} 00:18:53.667 }, 00:18:53.667 "method": "bdev_nvme_attach_controller" 00:18:53.667 } 00:18:53.667 EOF 00:18:53.667 )") 00:18:53.667 04:33:56 -- target/dif.sh@72 -- # (( file <= files )) 00:18:53.667 04:33:56 -- nvmf/common.sh@542 -- # cat 00:18:53.667 04:33:56 -- nvmf/common.sh@544 -- # jq . 00:18:53.667 04:33:56 -- nvmf/common.sh@545 -- # IFS=, 00:18:53.667 04:33:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:53.667 "params": { 00:18:53.667 "name": "Nvme0", 00:18:53.667 "trtype": "tcp", 00:18:53.667 "traddr": "10.0.0.2", 00:18:53.667 "adrfam": "ipv4", 00:18:53.667 "trsvcid": "4420", 00:18:53.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:53.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:53.667 "hdgst": false, 00:18:53.667 "ddgst": false 00:18:53.667 }, 00:18:53.667 "method": "bdev_nvme_attach_controller" 00:18:53.667 },{ 00:18:53.667 "params": { 00:18:53.667 "name": "Nvme1", 00:18:53.667 "trtype": "tcp", 00:18:53.667 "traddr": "10.0.0.2", 00:18:53.667 "adrfam": "ipv4", 00:18:53.667 "trsvcid": "4420", 00:18:53.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.667 "hdgst": false, 00:18:53.667 "ddgst": false 00:18:53.667 }, 00:18:53.667 "method": "bdev_nvme_attach_controller" 00:18:53.667 },{ 00:18:53.667 "params": { 00:18:53.667 "name": "Nvme2", 00:18:53.667 "trtype": "tcp", 00:18:53.667 "traddr": "10.0.0.2", 00:18:53.667 "adrfam": "ipv4", 00:18:53.667 "trsvcid": "4420", 00:18:53.667 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:53.667 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:53.667 "hdgst": false, 00:18:53.667 "ddgst": false 00:18:53.667 }, 00:18:53.667 "method": "bdev_nvme_attach_controller" 00:18:53.667 }' 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:53.667 04:33:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:53.667 04:33:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:53.667 04:33:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:53.667 04:33:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:53.667 04:33:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:53.667 04:33:56 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:53.927 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:53.927 ... 00:18:53.927 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:53.927 ... 00:18:53.927 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:53.927 ... 00:18:53.927 fio-3.35 00:18:53.927 Starting 24 threads 00:18:54.495 [2024-12-07 04:33:57.445760] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:54.495 [2024-12-07 04:33:57.445836] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:04.490 00:19:04.490 filename0: (groupid=0, jobs=1): err= 0: pid=74966: Sat Dec 7 04:34:07 2024 00:19:04.490 read: IOPS=223, BW=894KiB/s (916kB/s)(8948KiB/10007msec) 00:19:04.490 slat (usec): min=4, max=8030, avg=21.70, stdev=196.24 00:19:04.490 clat (msec): min=7, max=143, avg=71.46, stdev=20.63 00:19:04.490 lat (msec): min=7, max=143, avg=71.48, stdev=20.63 00:19:04.490 clat percentiles (msec): 00:19:04.490 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:19:04.490 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:19:04.490 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 101], 95.00th=[ 108], 00:19:04.490 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 144], 99.95th=[ 144], 00:19:04.490 | 99.99th=[ 144] 00:19:04.490 bw ( KiB/s): min= 640, max= 1080, per=4.16%, avg=888.45, stdev=133.88, samples=20 00:19:04.490 iops : min= 160, max= 270, avg=222.10, stdev=33.47, samples=20 00:19:04.490 lat (msec) : 10=0.31%, 20=0.27%, 50=21.50%, 100=67.64%, 250=10.28% 00:19:04.490 cpu : usr=36.53%, sys=1.95%, ctx=1312, majf=0, minf=9 00:19:04.490 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=78.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:04.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74967: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=219, BW=878KiB/s (899kB/s)(8804KiB/10030msec) 00:19:04.491 slat (usec): min=3, max=8030, avg=33.00, stdev=381.57 00:19:04.491 clat (msec): min=29, max=128, avg=72.77, stdev=19.40 00:19:04.491 lat (msec): min=29, max=128, avg=72.80, stdev=19.40 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:19:04.491 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 72], 00:19:04.491 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:19:04.491 | 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 129], 00:19:04.491 | 99.99th=[ 129] 00:19:04.491 bw ( KiB/s): min= 640, max= 1138, per=4.10%, avg=874.10, stdev=125.24, samples=20 00:19:04.491 iops : min= 160, max= 284, avg=218.50, stdev=31.25, samples=20 00:19:04.491 lat (msec) : 50=14.45%, 100=74.83%, 250=10.72% 00:19:04.491 cpu : usr=35.96%, sys=1.82%, ctx=1027, majf=0, minf=9 00:19:04.491 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=76.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:04.491 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74968: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=226, BW=906KiB/s (928kB/s)(9076KiB/10018msec) 00:19:04.491 slat (usec): min=4, max=7039, avg=17.94, stdev=147.56 00:19:04.491 clat (msec): min=18, max=138, avg=70.56, stdev=20.56 00:19:04.491 lat (msec): min=18, max=138, avg=70.58, stdev=20.57 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 48], 00:19:04.491 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:19:04.491 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 108], 00:19:04.491 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:19:04.491 | 99.99th=[ 140] 00:19:04.491 bw ( KiB/s): min= 641, max= 1128, per=4.22%, avg=901.25, stdev=143.07, samples=20 00:19:04.491 iops : min= 160, max= 282, avg=225.30, stdev=35.79, samples=20 00:19:04.491 lat (msec) : 20=0.26%, 50=23.67%, 100=66.64%, 250=9.43% 00:19:04.491 cpu : usr=31.84%, sys=1.82%, ctx=903, majf=0, minf=9 00:19:04.491 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74969: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=226, BW=905KiB/s (927kB/s)(9056KiB/10005msec) 00:19:04.491 slat (usec): min=3, max=4027, avg=19.99, stdev=126.66 00:19:04.491 clat (msec): min=7, max=129, avg=70.60, stdev=20.33 00:19:04.491 lat (msec): min=7, max=129, avg=70.62, stdev=20.33 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 49], 00:19:04.491 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:19:04.491 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 102], 95.00th=[ 108], 00:19:04.491 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 130], 99.95th=[ 130], 00:19:04.491 | 99.99th=[ 130] 00:19:04.491 bw ( KiB/s): min= 640, max= 1072, per=4.17%, avg=890.11, stdev=131.53, samples=19 00:19:04.491 iops : min= 160, max= 268, avg=222.53, stdev=32.88, samples=19 00:19:04.491 lat (msec) : 10=0.27%, 20=0.31%, 50=20.98%, 100=68.42%, 250=10.03% 00:19:04.491 cpu : usr=41.25%, sys=2.30%, ctx=1051, majf=0, minf=9 00:19:04.491 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74970: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=218, BW=876KiB/s (897kB/s)(8792KiB/10037msec) 00:19:04.491 slat (usec): min=4, max=8047, avg=17.71, stdev=171.45 00:19:04.491 clat (msec): min=10, max=143, avg=72.92, stdev=21.20 00:19:04.491 lat (msec): min=10, max=143, avg=72.94, stdev=21.20 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 12], 5.00th=[ 46], 
10.00th=[ 48], 20.00th=[ 56], 00:19:04.491 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:19:04.491 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:19:04.491 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 133], 00:19:04.491 | 99.99th=[ 144] 00:19:04.491 bw ( KiB/s): min= 640, max= 1277, per=4.09%, avg=872.65, stdev=158.20, samples=20 00:19:04.491 iops : min= 160, max= 319, avg=218.15, stdev=39.52, samples=20 00:19:04.491 lat (msec) : 20=1.36%, 50=15.01%, 100=73.16%, 250=10.46% 00:19:04.491 cpu : usr=34.92%, sys=1.87%, ctx=1133, majf=0, minf=9 00:19:04.491 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=78.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74971: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=208, BW=834KiB/s (854kB/s)(8352KiB/10012msec) 00:19:04.491 slat (usec): min=4, max=8026, avg=21.96, stdev=215.48 00:19:04.491 clat (msec): min=15, max=164, avg=76.59, stdev=25.65 00:19:04.491 lat (msec): min=15, max=164, avg=76.62, stdev=25.64 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:19:04.491 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:19:04.491 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 132], 00:19:04.491 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 165], 00:19:04.491 | 99.99th=[ 165] 00:19:04.491 bw ( KiB/s): min= 496, max= 1056, per=3.89%, avg=829.65, stdev=193.29, samples=20 00:19:04.491 iops : min= 124, max= 264, avg=207.50, stdev=48.21, samples=20 00:19:04.491 lat (msec) : 20=0.29%, 50=17.96%, 100=64.70%, 250=17.05% 00:19:04.491 cpu : usr=34.29%, sys=1.84%, ctx=1023, majf=0, minf=9 00:19:04.491 IO depths : 1=0.1%, 2=2.6%, 4=10.8%, 8=72.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74972: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=235, BW=941KiB/s (964kB/s)(9412KiB/10002msec) 00:19:04.491 slat (usec): min=7, max=8027, avg=27.82, stdev=330.15 00:19:04.491 clat (usec): min=1363, max=128452, avg=67881.19, stdev=23645.75 00:19:04.491 lat (usec): min=1371, max=128467, avg=67909.01, stdev=23646.75 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:19:04.491 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:19:04.491 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 108], 00:19:04.491 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 129], 00:19:04.491 | 99.99th=[ 129] 00:19:04.491 bw ( KiB/s): min= 640, max= 1048, per=4.21%, avg=898.95, stdev=137.90, samples=19 00:19:04.491 iops : min= 160, max= 262, avg=224.74, stdev=34.48, samples=19 00:19:04.491 lat (msec) : 2=0.68%, 4=2.04%, 10=0.85%, 20=0.25%, 50=22.40% 00:19:04.491 lat (msec) : 100=64.17%, 250=9.60% 00:19:04.491 cpu : usr=33.25%, sys=1.96%, ctx=954, majf=0, minf=9 
00:19:04.491 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename0: (groupid=0, jobs=1): err= 0: pid=74973: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=220, BW=884KiB/s (905kB/s)(8844KiB/10007msec) 00:19:04.491 slat (usec): min=3, max=8025, avg=30.10, stdev=294.93 00:19:04.491 clat (msec): min=6, max=141, avg=72.26, stdev=21.12 00:19:04.491 lat (msec): min=6, max=141, avg=72.29, stdev=21.12 00:19:04.491 clat percentiles (msec): 00:19:04.491 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 52], 00:19:04.491 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:19:04.491 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 103], 95.00th=[ 108], 00:19:04.491 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:19:04.491 | 99.99th=[ 142] 00:19:04.491 bw ( KiB/s): min= 640, max= 1128, per=4.12%, avg=880.85, stdev=153.43, samples=20 00:19:04.491 iops : min= 160, max= 282, avg=220.20, stdev=38.38, samples=20 00:19:04.491 lat (msec) : 10=0.27%, 20=0.14%, 50=18.27%, 100=70.47%, 250=10.85% 00:19:04.491 cpu : usr=43.62%, sys=2.54%, ctx=1324, majf=0, minf=10 00:19:04.491 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:04.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 complete : 0=0.0%, 4=88.7%, 8=9.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.491 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.491 filename1: (groupid=0, jobs=1): err= 0: pid=74974: Sat Dec 7 04:34:07 2024 00:19:04.491 read: IOPS=230, BW=921KiB/s (944kB/s)(9240KiB/10028msec) 00:19:04.491 slat (usec): min=3, max=8037, avg=23.67, stdev=250.18 00:19:04.491 clat (msec): min=24, max=120, avg=69.28, stdev=18.77 00:19:04.491 lat (msec): min=24, max=120, avg=69.30, stdev=18.77 00:19:04.491 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 50], 00:19:04.492 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 105], 00:19:04.492 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118], 00:19:04.492 | 99.99th=[ 122] 00:19:04.492 bw ( KiB/s): min= 688, max= 1080, per=4.31%, avg=920.10, stdev=110.94, samples=20 00:19:04.492 iops : min= 172, max= 270, avg=230.00, stdev=27.73, samples=20 00:19:04.492 lat (msec) : 50=20.69%, 100=70.82%, 250=8.48% 00:19:04.492 cpu : usr=40.68%, sys=2.29%, ctx=1181, majf=0, minf=9 00:19:04.492 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74975: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=223, BW=895KiB/s (916kB/s)(8976KiB/10031msec) 00:19:04.492 slat (usec): min=5, max=8025, avg=19.33, stdev=189.17 00:19:04.492 clat (msec): min=25, max=127, 
avg=71.36, stdev=19.39 00:19:04.492 lat (msec): min=25, max=127, avg=71.38, stdev=19.40 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:19:04.492 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 108], 00:19:04.492 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 123], 99.95th=[ 127], 00:19:04.492 | 99.99th=[ 128] 00:19:04.492 bw ( KiB/s): min= 640, max= 1021, per=4.18%, avg=891.05, stdev=115.76, samples=20 00:19:04.492 iops : min= 160, max= 255, avg=222.75, stdev=28.93, samples=20 00:19:04.492 lat (msec) : 50=17.29%, 100=72.77%, 250=9.94% 00:19:04.492 cpu : usr=38.26%, sys=2.41%, ctx=1079, majf=0, minf=9 00:19:04.492 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74976: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=223, BW=896KiB/s (917kB/s)(8972KiB/10017msec) 00:19:04.492 slat (usec): min=5, max=8024, avg=18.75, stdev=169.19 00:19:04.492 clat (msec): min=24, max=144, avg=71.33, stdev=20.64 00:19:04.492 lat (msec): min=24, max=144, avg=71.34, stdev=20.64 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:19:04.492 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 108], 00:19:04.492 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:19:04.492 | 99.99th=[ 144] 00:19:04.492 bw ( KiB/s): min= 664, max= 1040, per=4.19%, avg=893.20, stdev=127.64, samples=20 00:19:04.492 iops : min= 166, max= 260, avg=223.30, stdev=31.91, samples=20 00:19:04.492 lat (msec) : 50=20.11%, 100=70.31%, 250=9.59% 00:19:04.492 cpu : usr=35.80%, sys=2.05%, ctx=1054, majf=0, minf=9 00:19:04.492 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74977: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=222, BW=891KiB/s (912kB/s)(8940KiB/10039msec) 00:19:04.492 slat (usec): min=3, max=8020, avg=18.37, stdev=189.49 00:19:04.492 clat (msec): min=5, max=156, avg=71.74, stdev=21.79 00:19:04.492 lat (msec): min=5, max=156, avg=71.76, stdev=21.79 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 9], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 55], 00:19:04.492 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 74], 00:19:04.492 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 103], 95.00th=[ 109], 00:19:04.492 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 144], 00:19:04.492 | 99.99th=[ 157] 00:19:04.492 bw ( KiB/s): min= 632, max= 1383, per=4.16%, avg=887.15, stdev=174.66, samples=20 00:19:04.492 iops : min= 158, max= 345, avg=221.75, stdev=43.55, samples=20 00:19:04.492 lat (msec) : 10=1.34%, 20=0.81%, 50=14.05%, 100=72.04%, 
250=11.77% 00:19:04.492 cpu : usr=38.48%, sys=1.94%, ctx=1306, majf=0, minf=0 00:19:04.492 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74978: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=230, BW=921KiB/s (943kB/s)(9244KiB/10035msec) 00:19:04.492 slat (usec): min=7, max=4046, avg=18.82, stdev=118.39 00:19:04.492 clat (msec): min=9, max=130, avg=69.30, stdev=20.33 00:19:04.492 lat (msec): min=9, max=130, avg=69.32, stdev=20.33 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 31], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:19:04.492 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 108], 00:19:04.492 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 121], 99.95th=[ 131], 00:19:04.492 | 99.99th=[ 131] 00:19:04.492 bw ( KiB/s): min= 664, max= 1240, per=4.31%, avg=920.40, stdev=133.22, samples=20 00:19:04.492 iops : min= 166, max= 310, avg=230.10, stdev=33.31, samples=20 00:19:04.492 lat (msec) : 10=0.61%, 20=0.09%, 50=19.90%, 100=69.32%, 250=10.08% 00:19:04.492 cpu : usr=42.60%, sys=2.27%, ctx=1330, majf=0, minf=0 00:19:04.492 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74979: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=221, BW=886KiB/s (907kB/s)(8880KiB/10028msec) 00:19:04.492 slat (usec): min=3, max=8028, avg=18.22, stdev=170.16 00:19:04.492 clat (msec): min=34, max=120, avg=72.13, stdev=19.01 00:19:04.492 lat (msec): min=34, max=120, avg=72.15, stdev=19.01 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:19:04.492 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 97], 95.00th=[ 108], 00:19:04.492 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:19:04.492 | 99.99th=[ 121] 00:19:04.492 bw ( KiB/s): min= 664, max= 1048, per=4.14%, avg=883.80, stdev=111.45, samples=20 00:19:04.492 iops : min= 166, max= 262, avg=220.95, stdev=27.86, samples=20 00:19:04.492 lat (msec) : 50=18.33%, 100=73.60%, 250=8.06% 00:19:04.492 cpu : usr=31.45%, sys=1.69%, ctx=863, majf=0, minf=9 00:19:04.492 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74980: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=222, BW=891KiB/s (913kB/s)(8936KiB/10027msec) 00:19:04.492 slat (usec): min=3, 
max=8024, avg=17.31, stdev=169.55 00:19:04.492 clat (msec): min=19, max=131, avg=71.72, stdev=19.66 00:19:04.492 lat (msec): min=19, max=131, avg=71.73, stdev=19.66 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 26], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:19:04.492 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 100], 95.00th=[ 108], 00:19:04.492 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 121], 00:19:04.492 | 99.99th=[ 132] 00:19:04.492 bw ( KiB/s): min= 592, max= 1208, per=4.15%, avg=886.85, stdev=136.97, samples=20 00:19:04.492 iops : min= 148, max= 302, avg=221.70, stdev=34.25, samples=20 00:19:04.492 lat (msec) : 20=0.63%, 50=17.28%, 100=72.16%, 250=9.94% 00:19:04.492 cpu : usr=31.44%, sys=1.76%, ctx=847, majf=0, minf=9 00:19:04.492 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.492 filename1: (groupid=0, jobs=1): err= 0: pid=74981: Sat Dec 7 04:34:07 2024 00:19:04.492 read: IOPS=221, BW=886KiB/s (907kB/s)(8880KiB/10022msec) 00:19:04.492 slat (usec): min=8, max=8030, avg=25.85, stdev=294.43 00:19:04.492 clat (msec): min=26, max=132, avg=72.08, stdev=20.54 00:19:04.492 lat (msec): min=26, max=132, avg=72.11, stdev=20.54 00:19:04.492 clat percentiles (msec): 00:19:04.492 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:19:04.492 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 72], 00:19:04.492 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 108], 00:19:04.492 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:19:04.492 | 99.99th=[ 132] 00:19:04.492 bw ( KiB/s): min= 640, max= 1056, per=4.14%, avg=883.65, stdev=126.57, samples=20 00:19:04.492 iops : min= 160, max= 264, avg=220.90, stdev=31.65, samples=20 00:19:04.492 lat (msec) : 50=19.37%, 100=70.27%, 250=10.36% 00:19:04.492 cpu : usr=34.84%, sys=1.92%, ctx=1063, majf=0, minf=9 00:19:04.492 IO depths : 1=0.1%, 2=0.9%, 4=3.9%, 8=79.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:04.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.492 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74982: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=222, BW=889KiB/s (910kB/s)(8924KiB/10043msec) 00:19:04.493 slat (usec): min=3, max=8023, avg=21.90, stdev=208.05 00:19:04.493 clat (msec): min=8, max=144, avg=71.87, stdev=21.77 00:19:04.493 lat (msec): min=8, max=144, avg=71.89, stdev=21.77 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 10], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:19:04.493 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 73], 00:19:04.493 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 109], 00:19:04.493 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 144], 00:19:04.493 | 99.99th=[ 144] 00:19:04.493 bw ( KiB/s): min= 640, max= 1282, per=4.15%, avg=886.10, stdev=161.03, samples=20 00:19:04.493 iops : min= 160, max= 320, avg=221.50, 
stdev=40.19, samples=20 00:19:04.493 lat (msec) : 10=1.34%, 20=0.09%, 50=15.51%, 100=71.45%, 250=11.61% 00:19:04.493 cpu : usr=36.27%, sys=2.10%, ctx=1079, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74983: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=213, BW=852KiB/s (873kB/s)(8548KiB/10031msec) 00:19:04.493 slat (usec): min=7, max=8027, avg=19.66, stdev=185.76 00:19:04.493 clat (msec): min=34, max=135, avg=74.91, stdev=21.74 00:19:04.493 lat (msec): min=34, max=135, avg=74.93, stdev=21.75 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:19:04.493 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:19:04.493 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:19:04.493 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:19:04.493 | 99.99th=[ 136] 00:19:04.493 bw ( KiB/s): min= 600, max= 1024, per=3.98%, avg=850.80, stdev=156.47, samples=20 00:19:04.493 iops : min= 150, max= 256, avg=212.70, stdev=39.12, samples=20 00:19:04.493 lat (msec) : 50=15.96%, 100=68.83%, 250=15.21% 00:19:04.493 cpu : usr=38.58%, sys=2.14%, ctx=1127, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=89.4%, 8=8.9%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74984: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=223, BW=895KiB/s (916kB/s)(8988KiB/10046msec) 00:19:04.493 slat (usec): min=3, max=8025, avg=19.65, stdev=186.31 00:19:04.493 clat (msec): min=9, max=134, avg=71.36, stdev=20.39 00:19:04.493 lat (msec): min=9, max=134, avg=71.38, stdev=20.39 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:19:04.493 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:19:04.493 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 108], 00:19:04.493 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 125], 99.95th=[ 129], 00:19:04.493 | 99.99th=[ 136] 00:19:04.493 bw ( KiB/s): min= 640, max= 1253, per=4.18%, avg=892.25, stdev=143.03, samples=20 00:19:04.493 iops : min= 160, max= 313, avg=223.05, stdev=35.72, samples=20 00:19:04.493 lat (msec) : 10=0.62%, 20=0.71%, 50=14.78%, 100=73.83%, 250=10.06% 00:19:04.493 cpu : usr=37.82%, sys=2.27%, ctx=1257, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=81.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74985: Sat Dec 7 04:34:07 2024 00:19:04.493 
read: IOPS=219, BW=879KiB/s (900kB/s)(8796KiB/10008msec) 00:19:04.493 slat (usec): min=7, max=8030, avg=23.36, stdev=256.32 00:19:04.493 clat (msec): min=11, max=155, avg=72.66, stdev=20.79 00:19:04.493 lat (msec): min=11, max=155, avg=72.69, stdev=20.78 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:19:04.493 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:19:04.493 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:19:04.493 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 157], 00:19:04.493 | 99.99th=[ 157] 00:19:04.493 bw ( KiB/s): min= 528, max= 1024, per=4.09%, avg=873.20, stdev=170.33, samples=20 00:19:04.493 iops : min= 132, max= 256, avg=218.30, stdev=42.58, samples=20 00:19:04.493 lat (msec) : 20=0.27%, 50=17.64%, 100=70.76%, 250=11.32% 00:19:04.493 cpu : usr=36.21%, sys=2.02%, ctx=989, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=89.6%, 8=8.4%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74986: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=208, BW=834KiB/s (854kB/s)(8344KiB/10002msec) 00:19:04.493 slat (usec): min=7, max=8025, avg=21.78, stdev=214.90 00:19:04.493 clat (usec): min=1303, max=151259, avg=76581.45, stdev=26612.79 00:19:04.493 lat (usec): min=1311, max=151279, avg=76603.23, stdev=26614.78 00:19:04.493 clat percentiles (usec): 00:19:04.493 | 1.00th=[ 1827], 5.00th=[ 37487], 10.00th=[ 47973], 20.00th=[ 60556], 00:19:04.493 | 30.00th=[ 64750], 40.00th=[ 70779], 50.00th=[ 72877], 60.00th=[ 81265], 00:19:04.493 | 70.00th=[ 93848], 80.00th=[100140], 90.00th=[107480], 95.00th=[111674], 00:19:04.493 | 99.00th=[131597], 99.50th=[152044], 99.90th=[152044], 99.95th=[152044], 00:19:04.493 | 99.99th=[152044] 00:19:04.493 bw ( KiB/s): min= 528, max= 1072, per=3.67%, avg=784.00, stdev=147.30, samples=19 00:19:04.493 iops : min= 132, max= 268, avg=196.00, stdev=36.82, samples=19 00:19:04.493 lat (msec) : 2=1.53%, 4=0.96%, 10=2.01%, 20=0.29%, 50=7.62% 00:19:04.493 lat (msec) : 100=67.11%, 250=20.47% 00:19:04.493 cpu : usr=41.35%, sys=2.26%, ctx=1289, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=4.6%, 4=18.1%, 8=63.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=92.4%, 8=3.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74987: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=227, BW=911KiB/s (933kB/s)(9148KiB/10042msec) 00:19:04.493 slat (usec): min=5, max=7025, avg=17.18, stdev=146.70 00:19:04.493 clat (usec): min=1376, max=143929, avg=70132.08, stdev=22818.72 00:19:04.493 lat (usec): min=1387, max=143940, avg=70149.26, stdev=22815.26 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 52], 00:19:04.493 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 73], 00:19:04.493 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 102], 95.00th=[ 108], 00:19:04.493 | 
99.00th=[ 117], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:19:04.493 | 99.99th=[ 144] 00:19:04.493 bw ( KiB/s): min= 696, max= 1555, per=4.25%, avg=907.75, stdev=190.27, samples=20 00:19:04.493 iops : min= 174, max= 388, avg=226.90, stdev=47.43, samples=20 00:19:04.493 lat (msec) : 2=0.09%, 4=0.70%, 10=1.92%, 20=0.09%, 50=16.09% 00:19:04.493 lat (msec) : 100=70.66%, 250=10.45% 00:19:04.493 cpu : usr=44.89%, sys=2.42%, ctx=1582, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=0.3%, 4=1.5%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74988: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=229, BW=920KiB/s (942kB/s)(9204KiB/10009msec) 00:19:04.493 slat (usec): min=8, max=8028, avg=24.69, stdev=250.52 00:19:04.493 clat (msec): min=8, max=142, avg=69.47, stdev=20.54 00:19:04.493 lat (msec): min=8, max=142, avg=69.50, stdev=20.54 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:19:04.493 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:19:04.493 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 107], 00:19:04.493 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 131], 99.95th=[ 142], 00:19:04.493 | 99.99th=[ 142] 00:19:04.493 bw ( KiB/s): min= 656, max= 1072, per=4.29%, avg=916.40, stdev=131.26, samples=20 00:19:04.493 iops : min= 164, max= 268, avg=229.10, stdev=32.82, samples=20 00:19:04.493 lat (msec) : 10=0.30%, 20=0.13%, 50=22.29%, 100=68.10%, 250=9.17% 00:19:04.493 cpu : usr=39.12%, sys=2.01%, ctx=1084, majf=0, minf=9 00:19:04.493 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:04.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.493 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.493 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.493 filename2: (groupid=0, jobs=1): err= 0: pid=74989: Sat Dec 7 04:34:07 2024 00:19:04.493 read: IOPS=225, BW=902KiB/s (923kB/s)(9044KiB/10031msec) 00:19:04.493 slat (usec): min=3, max=8880, avg=17.54, stdev=186.52 00:19:04.493 clat (msec): min=24, max=131, avg=70.85, stdev=18.81 00:19:04.493 lat (msec): min=24, max=131, avg=70.87, stdev=18.81 00:19:04.493 clat percentiles (msec): 00:19:04.493 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:19:04.493 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:19:04.493 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 108], 00:19:04.494 | 99.00th=[ 109], 99.50th=[ 117], 99.90th=[ 122], 99.95th=[ 124], 00:19:04.494 | 99.99th=[ 132] 00:19:04.494 bw ( KiB/s): min= 712, max= 1061, per=4.20%, avg=897.85, stdev=101.36, samples=20 00:19:04.494 iops : min= 178, max= 265, avg=224.45, stdev=25.32, samples=20 00:19:04.494 lat (msec) : 50=18.53%, 100=73.55%, 250=7.92% 00:19:04.494 cpu : usr=31.44%, sys=1.71%, ctx=842, majf=0, minf=9 00:19:04.494 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:04.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.494 complete : 0=0.0%, 4=87.6%, 8=12.3%, 
16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.494 issued rwts: total=2261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.494 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:04.494 00:19:04.494 Run status group 0 (all jobs): 00:19:04.494 READ: bw=20.8MiB/s (21.8MB/s), 834KiB/s-941KiB/s (854kB/s-964kB/s), io=209MiB (219MB), run=10002-10046msec 00:19:04.752 04:34:07 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:04.752 04:34:07 -- target/dif.sh@43 -- # local sub 00:19:04.752 04:34:07 -- target/dif.sh@45 -- # for sub in "$@" 00:19:04.752 04:34:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:04.752 04:34:07 -- target/dif.sh@36 -- # local sub_id=0 00:19:04.752 04:34:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:04.752 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.752 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.752 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.752 04:34:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:04.752 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.752 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.752 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.752 04:34:07 -- target/dif.sh@45 -- # for sub in "$@" 00:19:04.752 04:34:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:04.752 04:34:07 -- target/dif.sh@36 -- # local sub_id=1 00:19:04.752 04:34:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.752 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.752 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.752 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.752 04:34:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:04.752 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.752 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.752 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@45 -- # for sub in "$@" 00:19:04.753 04:34:07 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:04.753 04:34:07 -- target/dif.sh@36 -- # local sub_id=2 00:19:04.753 04:34:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:04.753 04:34:07 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:04.753 04:34:07 -- target/dif.sh@115 -- # numjobs=2 00:19:04.753 04:34:07 -- target/dif.sh@115 -- # iodepth=8 00:19:04.753 04:34:07 -- target/dif.sh@115 -- # runtime=5 00:19:04.753 04:34:07 -- target/dif.sh@115 -- # files=1 00:19:04.753 04:34:07 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:04.753 04:34:07 -- target/dif.sh@28 -- # local sub 00:19:04.753 04:34:07 -- target/dif.sh@30 -- # for sub in "$@" 00:19:04.753 04:34:07 -- target/dif.sh@31 -- # create_subsystem 0 00:19:04.753 04:34:07 -- 
target/dif.sh@18 -- # local sub_id=0 00:19:04.753 04:34:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 bdev_null0 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 [2024-12-07 04:34:07.923883] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@30 -- # for sub in "$@" 00:19:04.753 04:34:07 -- target/dif.sh@31 -- # create_subsystem 1 00:19:04.753 04:34:07 -- target/dif.sh@18 -- # local sub_id=1 00:19:04.753 04:34:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 bdev_null1 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.753 04:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.753 04:34:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.753 04:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.753 04:34:07 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:04.753 04:34:07 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:04.753 04:34:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:04.753 04:34:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:04.753 04:34:07 -- common/autotest_common.sh@1345 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:04.753 04:34:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:04.753 04:34:07 -- nvmf/common.sh@520 -- # config=() 00:19:04.753 04:34:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:04.753 04:34:07 -- target/dif.sh@82 -- # gen_fio_conf 00:19:04.753 04:34:07 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.753 04:34:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:04.753 04:34:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.753 04:34:07 -- target/dif.sh@54 -- # local file 00:19:04.753 04:34:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.753 04:34:07 -- common/autotest_common.sh@1330 -- # shift 00:19:04.753 04:34:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:04.753 04:34:07 -- target/dif.sh@56 -- # cat 00:19:04.753 04:34:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.753 { 00:19:04.753 "params": { 00:19:04.753 "name": "Nvme$subsystem", 00:19:04.753 "trtype": "$TEST_TRANSPORT", 00:19:04.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.753 "adrfam": "ipv4", 00:19:04.753 "trsvcid": "$NVMF_PORT", 00:19:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.753 "hdgst": ${hdgst:-false}, 00:19:04.753 "ddgst": ${ddgst:-false} 00:19:04.753 }, 00:19:04.753 "method": "bdev_nvme_attach_controller" 00:19:04.753 } 00:19:04.753 EOF 00:19:04.753 )") 00:19:04.753 04:34:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.753 04:34:07 -- nvmf/common.sh@542 -- # cat 00:19:04.753 04:34:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:04.753 04:34:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:04.753 04:34:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:04.753 04:34:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.753 04:34:07 -- target/dif.sh@72 -- # (( file <= files )) 00:19:04.753 04:34:07 -- target/dif.sh@73 -- # cat 00:19:04.753 04:34:07 -- target/dif.sh@72 -- # (( file++ )) 00:19:04.753 04:34:07 -- target/dif.sh@72 -- # (( file <= files )) 00:19:04.753 04:34:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.753 04:34:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.753 { 00:19:04.753 "params": { 00:19:04.753 "name": "Nvme$subsystem", 00:19:04.753 "trtype": "$TEST_TRANSPORT", 00:19:04.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.753 "adrfam": "ipv4", 00:19:04.753 "trsvcid": "$NVMF_PORT", 00:19:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.753 "hdgst": ${hdgst:-false}, 00:19:04.753 "ddgst": ${ddgst:-false} 00:19:04.753 }, 00:19:04.753 "method": "bdev_nvme_attach_controller" 00:19:04.753 } 00:19:04.753 EOF 00:19:04.753 )") 00:19:04.753 04:34:07 -- nvmf/common.sh@542 -- # cat 00:19:04.753 04:34:07 -- nvmf/common.sh@544 -- # jq . 
00:19:04.753 04:34:07 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.753 04:34:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.753 "params": { 00:19:04.753 "name": "Nvme0", 00:19:04.753 "trtype": "tcp", 00:19:04.753 "traddr": "10.0.0.2", 00:19:04.753 "adrfam": "ipv4", 00:19:04.753 "trsvcid": "4420", 00:19:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:04.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:04.753 "hdgst": false, 00:19:04.753 "ddgst": false 00:19:04.753 }, 00:19:04.753 "method": "bdev_nvme_attach_controller" 00:19:04.753 },{ 00:19:04.753 "params": { 00:19:04.753 "name": "Nvme1", 00:19:04.753 "trtype": "tcp", 00:19:04.753 "traddr": "10.0.0.2", 00:19:04.753 "adrfam": "ipv4", 00:19:04.753 "trsvcid": "4420", 00:19:04.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.753 "hdgst": false, 00:19:04.753 "ddgst": false 00:19:04.753 }, 00:19:04.753 "method": "bdev_nvme_attach_controller" 00:19:04.753 }' 00:19:05.013 04:34:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:05.013 04:34:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:05.013 04:34:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.013 04:34:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.013 04:34:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:05.013 04:34:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:05.013 04:34:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:05.013 04:34:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:05.013 04:34:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.013 04:34:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.013 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:05.013 ... 00:19:05.013 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:05.013 ... 00:19:05.013 fio-3.35 00:19:05.013 Starting 4 threads 00:19:05.582 [2024-12-07 04:34:08.560687] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:05.582 [2024-12-07 04:34:08.560766] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:10.857 00:19:10.857 filename0: (groupid=0, jobs=1): err= 0: pid=75122: Sat Dec 7 04:34:13 2024 00:19:10.857 read: IOPS=1704, BW=13.3MiB/s (14.0MB/s)(66.6MiB/5002msec) 00:19:10.857 slat (nsec): min=6848, max=62029, avg=11434.28, stdev=5309.68 00:19:10.857 clat (usec): min=721, max=6575, avg=4645.58, stdev=457.51 00:19:10.857 lat (usec): min=729, max=6595, avg=4657.01, stdev=456.21 00:19:10.857 clat percentiles (usec): 00:19:10.857 | 1.00th=[ 2606], 5.00th=[ 3752], 10.00th=[ 3949], 20.00th=[ 4555], 00:19:10.857 | 30.00th=[ 4686], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 00:19:10.857 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5014], 95.00th=[ 5014], 00:19:10.857 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 6063], 99.95th=[ 6194], 00:19:10.857 | 99.99th=[ 6587] 00:19:10.857 bw ( KiB/s): min=13056, max=16000, per=20.24%, avg=13649.78, stdev=993.95, samples=9 00:19:10.857 iops : min= 1632, max= 2000, avg=1706.22, stdev=124.24, samples=9 00:19:10.857 lat (usec) : 750=0.04%, 1000=0.01% 00:19:10.857 lat (msec) : 2=0.14%, 4=11.47%, 10=88.34% 00:19:10.857 cpu : usr=91.90%, sys=7.38%, ctx=18, majf=0, minf=9 00:19:10.857 IO depths : 1=0.1%, 2=24.4%, 4=50.4%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.857 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.857 issued rwts: total=8526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.857 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:10.857 filename0: (groupid=0, jobs=1): err= 0: pid=75123: Sat Dec 7 04:34:13 2024 00:19:10.857 read: IOPS=2248, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5001msec) 00:19:10.857 slat (nsec): min=7078, max=81734, avg=15226.27, stdev=4705.27 00:19:10.857 clat (usec): min=802, max=7006, avg=3519.29, stdev=1041.23 00:19:10.857 lat (usec): min=809, max=7032, avg=3534.52, stdev=1040.82 00:19:10.857 clat percentiles (usec): 00:19:10.857 | 1.00th=[ 1827], 5.00th=[ 1975], 10.00th=[ 2057], 20.00th=[ 2540], 00:19:10.857 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3654], 60.00th=[ 4113], 00:19:10.857 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:10.857 | 99.00th=[ 5014], 99.50th=[ 5014], 99.90th=[ 5211], 99.95th=[ 6783], 00:19:10.857 | 99.99th=[ 6849] 00:19:10.857 bw ( KiB/s): min=17042, max=18320, per=26.57%, avg=17918.44, stdev=418.50, samples=9 00:19:10.857 iops : min= 2130, max= 2290, avg=2239.78, stdev=52.38, samples=9 00:19:10.857 lat (usec) : 1000=0.04% 00:19:10.857 lat (msec) : 2=6.32%, 4=51.30%, 10=42.34% 00:19:10.857 cpu : usr=91.82%, sys=7.20%, ctx=10, majf=0, minf=9 00:19:10.857 IO depths : 1=0.1%, 2=2.4%, 4=62.3%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.857 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.857 issued rwts: total=11247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.857 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:10.857 filename1: (groupid=0, jobs=1): err= 0: pid=75124: Sat Dec 7 04:34:13 2024 00:19:10.857 read: IOPS=2234, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5002msec) 00:19:10.857 slat (nsec): min=6516, max=54675, avg=15049.73, stdev=4451.04 00:19:10.857 clat (usec): min=1457, max=6807, avg=3542.06, stdev=1026.55 00:19:10.857 lat (usec): min=1470, max=6821, avg=3557.10, 
stdev=1026.45 00:19:10.857 clat percentiles (usec): 00:19:10.857 | 1.00th=[ 1893], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 2540], 00:19:10.857 | 30.00th=[ 2802], 40.00th=[ 2966], 50.00th=[ 3687], 60.00th=[ 4178], 00:19:10.857 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:10.857 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 5145], 00:19:10.857 | 99.99th=[ 5276] 00:19:10.857 bw ( KiB/s): min=16000, max=18320, per=26.39%, avg=17802.67, stdev=723.94, samples=9 00:19:10.858 iops : min= 2000, max= 2290, avg=2225.33, stdev=90.49, samples=9 00:19:10.858 lat (msec) : 2=5.59%, 4=50.86%, 10=43.54% 00:19:10.858 cpu : usr=91.64%, sys=7.40%, ctx=4, majf=0, minf=0 00:19:10.858 IO depths : 1=0.1%, 2=2.8%, 4=62.1%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.858 complete : 0=0.0%, 4=99.0%, 8=1.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.858 issued rwts: total=11177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.858 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:10.858 filename1: (groupid=0, jobs=1): err= 0: pid=75125: Sat Dec 7 04:34:13 2024 00:19:10.858 read: IOPS=2244, BW=17.5MiB/s (18.4MB/s)(87.7MiB/5003msec) 00:19:10.858 slat (nsec): min=4829, max=56610, avg=11252.94, stdev=5036.47 00:19:10.858 clat (usec): min=1371, max=6715, avg=3532.75, stdev=1041.98 00:19:10.858 lat (usec): min=1391, max=6728, avg=3544.00, stdev=1041.75 00:19:10.858 clat percentiles (usec): 00:19:10.858 | 1.00th=[ 1876], 5.00th=[ 1958], 10.00th=[ 2024], 20.00th=[ 2540], 00:19:10.858 | 30.00th=[ 2802], 40.00th=[ 2966], 50.00th=[ 3654], 60.00th=[ 4146], 00:19:10.858 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4883], 00:19:10.858 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5145], 99.95th=[ 5211], 00:19:10.858 | 99.99th=[ 6587] 00:19:10.858 bw ( KiB/s): min=16095, max=18320, per=26.52%, avg=17890.89, stdev=679.75, samples=9 00:19:10.858 iops : min= 2011, max= 2290, avg=2236.22, stdev=85.24, samples=9 00:19:10.858 lat (msec) : 2=7.76%, 4=49.16%, 10=43.09% 00:19:10.858 cpu : usr=91.94%, sys=7.08%, ctx=145, majf=0, minf=0 00:19:10.858 IO depths : 1=0.1%, 2=2.5%, 4=62.3%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.858 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.858 issued rwts: total=11231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.858 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:10.858 00:19:10.858 Run status group 0 (all jobs): 00:19:10.858 READ: bw=65.9MiB/s (69.1MB/s), 13.3MiB/s-17.6MiB/s (14.0MB/s-18.4MB/s), io=330MiB (346MB), run=5001-5003msec 00:19:10.858 04:34:13 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:10.858 04:34:13 -- target/dif.sh@43 -- # local sub 00:19:10.858 04:34:13 -- target/dif.sh@45 -- # for sub in "$@" 00:19:10.858 04:34:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:10.858 04:34:13 -- target/dif.sh@36 -- # local sub_id=0 00:19:10.858 04:34:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- target/dif.sh@45 -- # for sub in "$@" 00:19:10.858 04:34:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:10.858 04:34:13 -- target/dif.sh@36 -- # local sub_id=1 00:19:10.858 04:34:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 00:19:10.858 real 0m23.148s 00:19:10.858 user 2m3.546s 00:19:10.858 sys 0m8.306s 00:19:10.858 04:34:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 ************************************ 00:19:10.858 END TEST fio_dif_rand_params 00:19:10.858 ************************************ 00:19:10.858 04:34:13 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:10.858 04:34:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:10.858 04:34:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 ************************************ 00:19:10.858 START TEST fio_dif_digest 00:19:10.858 ************************************ 00:19:10.858 04:34:13 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:10.858 04:34:13 -- target/dif.sh@123 -- # local NULL_DIF 00:19:10.858 04:34:13 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:10.858 04:34:13 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:10.858 04:34:13 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:10.858 04:34:13 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:10.858 04:34:13 -- target/dif.sh@127 -- # numjobs=3 00:19:10.858 04:34:13 -- target/dif.sh@127 -- # iodepth=3 00:19:10.858 04:34:13 -- target/dif.sh@127 -- # runtime=10 00:19:10.858 04:34:13 -- target/dif.sh@128 -- # hdgst=true 00:19:10.858 04:34:13 -- target/dif.sh@128 -- # ddgst=true 00:19:10.858 04:34:13 -- target/dif.sh@130 -- # create_subsystems 0 00:19:10.858 04:34:13 -- target/dif.sh@28 -- # local sub 00:19:10.858 04:34:13 -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.858 04:34:13 -- target/dif.sh@31 -- # create_subsystem 0 00:19:10.858 04:34:13 -- target/dif.sh@18 -- # local sub_id=0 00:19:10.858 04:34:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 bdev_null0 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:10.858 04:34:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.858 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.858 [2024-12-07 04:34:13.982262] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.858 04:34:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.858 04:34:13 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:10.858 04:34:13 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:10.858 04:34:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:10.858 04:34:13 -- nvmf/common.sh@520 -- # config=() 00:19:10.858 04:34:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.858 04:34:13 -- nvmf/common.sh@520 -- # local subsystem config 00:19:10.858 04:34:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.858 04:34:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:10.858 04:34:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:10.858 04:34:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:10.858 { 00:19:10.858 "params": { 00:19:10.858 "name": "Nvme$subsystem", 00:19:10.858 "trtype": "$TEST_TRANSPORT", 00:19:10.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.858 "adrfam": "ipv4", 00:19:10.858 "trsvcid": "$NVMF_PORT", 00:19:10.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.858 "hdgst": ${hdgst:-false}, 00:19:10.858 "ddgst": ${ddgst:-false} 00:19:10.858 }, 00:19:10.858 "method": "bdev_nvme_attach_controller" 00:19:10.858 } 00:19:10.858 EOF 00:19:10.858 )") 00:19:10.858 04:34:13 -- target/dif.sh@82 -- # gen_fio_conf 00:19:10.858 04:34:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.858 04:34:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:10.858 04:34:13 -- target/dif.sh@54 -- # local file 00:19:10.858 04:34:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.858 04:34:13 -- common/autotest_common.sh@1330 -- # shift 00:19:10.858 04:34:13 -- target/dif.sh@56 -- # cat 00:19:10.858 04:34:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:10.858 04:34:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.858 04:34:13 -- nvmf/common.sh@542 -- # cat 00:19:10.858 04:34:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.858 04:34:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:10.858 04:34:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:10.858 04:34:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:10.858 04:34:13 -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.858 04:34:13 -- nvmf/common.sh@544 -- # jq . 
00:19:10.858 04:34:13 -- nvmf/common.sh@545 -- # IFS=, 00:19:10.858 04:34:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:10.858 "params": { 00:19:10.858 "name": "Nvme0", 00:19:10.858 "trtype": "tcp", 00:19:10.858 "traddr": "10.0.0.2", 00:19:10.858 "adrfam": "ipv4", 00:19:10.858 "trsvcid": "4420", 00:19:10.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:10.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:10.858 "hdgst": true, 00:19:10.858 "ddgst": true 00:19:10.858 }, 00:19:10.858 "method": "bdev_nvme_attach_controller" 00:19:10.858 }' 00:19:10.858 04:34:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:10.858 04:34:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:10.858 04:34:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.858 04:34:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:10.858 04:34:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.858 04:34:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:10.859 04:34:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:10.859 04:34:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:10.859 04:34:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.859 04:34:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:11.117 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:11.117 ... 00:19:11.117 fio-3.35 00:19:11.117 Starting 3 threads 00:19:11.376 [2024-12-07 04:34:14.549788] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:11.376 [2024-12-07 04:34:14.549856] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:23.634 00:19:23.634 filename0: (groupid=0, jobs=1): err= 0: pid=75236: Sat Dec 7 04:34:24 2024 00:19:23.634 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(293MiB/10001msec) 00:19:23.634 slat (nsec): min=7307, max=53788, avg=15046.94, stdev=4606.84 00:19:23.634 clat (usec): min=11635, max=20719, avg=12787.71, stdev=541.28 00:19:23.634 lat (usec): min=11649, max=20743, avg=12802.76, stdev=541.63 00:19:23.634 clat percentiles (usec): 00:19:23.634 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12256], 20.00th=[12387], 00:19:23.634 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:19:23.634 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:23.634 | 99.00th=[13698], 99.50th=[13829], 99.90th=[20579], 99.95th=[20579], 00:19:23.634 | 99.99th=[20841] 00:19:23.634 bw ( KiB/s): min=29184, max=30720, per=33.27%, avg=29914.74, stdev=311.51, samples=19 00:19:23.634 iops : min= 228, max= 240, avg=233.68, stdev= 2.43, samples=19 00:19:23.634 lat (msec) : 20=99.87%, 50=0.13% 00:19:23.634 cpu : usr=91.96%, sys=7.41%, ctx=57, majf=0, minf=9 00:19:23.634 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.634 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.634 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:23.634 filename0: (groupid=0, jobs=1): err= 0: pid=75237: Sat Dec 7 04:34:24 2024 00:19:23.634 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10003msec) 00:19:23.635 slat (nsec): min=6853, max=50594, avg=10192.71, stdev=4366.37 00:19:23.635 clat (usec): min=8233, max=13960, avg=12782.69, stdev=487.07 00:19:23.635 lat (usec): min=8240, max=13973, avg=12792.88, stdev=487.28 00:19:23.635 clat percentiles (usec): 00:19:23.635 | 1.00th=[11863], 5.00th=[12125], 10.00th=[12256], 20.00th=[12387], 00:19:23.635 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:19:23.635 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:23.635 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:19:23.635 | 99.99th=[13960] 00:19:23.635 bw ( KiB/s): min=29184, max=30720, per=33.31%, avg=29952.00, stdev=362.04, samples=19 00:19:23.635 iops : min= 228, max= 240, avg=234.00, stdev= 2.83, samples=19 00:19:23.635 lat (msec) : 10=0.13%, 20=99.87% 00:19:23.635 cpu : usr=91.48%, sys=7.92%, ctx=85, majf=0, minf=0 00:19:23.635 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.635 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:23.635 filename0: (groupid=0, jobs=1): err= 0: pid=75238: Sat Dec 7 04:34:24 2024 00:19:23.635 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10002msec) 00:19:23.635 slat (nsec): min=7390, max=65316, avg=14610.68, stdev=4810.14 00:19:23.635 clat (usec): min=9249, max=13874, avg=12774.42, stdev=477.53 00:19:23.635 lat (usec): min=9258, max=13887, avg=12789.03, stdev=477.97 00:19:23.635 clat percentiles (usec): 00:19:23.635 | 1.00th=[11863], 5.00th=[12125], 
10.00th=[12256], 20.00th=[12387], 00:19:23.635 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:19:23.635 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:23.635 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13829], 99.95th=[13829], 00:19:23.635 | 99.99th=[13829] 00:19:23.635 bw ( KiB/s): min=29184, max=30720, per=33.31%, avg=29952.00, stdev=362.04, samples=19 00:19:23.635 iops : min= 228, max= 240, avg=234.00, stdev= 2.83, samples=19 00:19:23.635 lat (msec) : 10=0.13%, 20=99.87% 00:19:23.635 cpu : usr=92.06%, sys=7.35%, ctx=92, majf=0, minf=9 00:19:23.635 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.635 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:23.635 00:19:23.635 Run status group 0 (all jobs): 00:19:23.635 READ: bw=87.8MiB/s (92.1MB/s), 29.2MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=878MiB (921MB), run=10001-10003msec 00:19:23.635 04:34:24 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:23.635 04:34:24 -- target/dif.sh@43 -- # local sub 00:19:23.635 04:34:24 -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.635 04:34:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:23.635 04:34:24 -- target/dif.sh@36 -- # local sub_id=0 00:19:23.635 04:34:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:23.635 04:34:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.635 04:34:24 -- common/autotest_common.sh@10 -- # set +x 00:19:23.635 04:34:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.635 04:34:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:23.635 04:34:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.635 04:34:24 -- common/autotest_common.sh@10 -- # set +x 00:19:23.635 04:34:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.635 00:19:23.635 real 0m10.908s 00:19:23.635 user 0m28.141s 00:19:23.635 sys 0m2.506s 00:19:23.635 04:34:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:23.635 ************************************ 00:19:23.635 END TEST fio_dif_digest 00:19:23.635 04:34:24 -- common/autotest_common.sh@10 -- # set +x 00:19:23.635 ************************************ 00:19:23.635 04:34:24 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:23.635 04:34:24 -- target/dif.sh@147 -- # nvmftestfini 00:19:23.635 04:34:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:23.635 04:34:24 -- nvmf/common.sh@116 -- # sync 00:19:23.635 04:34:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:23.635 04:34:24 -- nvmf/common.sh@119 -- # set +e 00:19:23.635 04:34:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:23.635 04:34:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:23.635 rmmod nvme_tcp 00:19:23.635 rmmod nvme_fabrics 00:19:23.635 rmmod nvme_keyring 00:19:23.635 04:34:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:23.635 04:34:25 -- nvmf/common.sh@123 -- # set -e 00:19:23.635 04:34:25 -- nvmf/common.sh@124 -- # return 0 00:19:23.635 04:34:25 -- nvmf/common.sh@477 -- # '[' -n 74475 ']' 00:19:23.635 04:34:25 -- nvmf/common.sh@478 -- # killprocess 74475 00:19:23.635 04:34:25 -- common/autotest_common.sh@936 -- # '[' -z 74475 ']' 00:19:23.635 04:34:25 -- common/autotest_common.sh@940 
-- # kill -0 74475 00:19:23.635 04:34:25 -- common/autotest_common.sh@941 -- # uname 00:19:23.635 04:34:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.635 04:34:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74475 00:19:23.635 04:34:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:23.635 04:34:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:23.635 killing process with pid 74475 00:19:23.635 04:34:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74475' 00:19:23.635 04:34:25 -- common/autotest_common.sh@955 -- # kill 74475 00:19:23.635 04:34:25 -- common/autotest_common.sh@960 -- # wait 74475 00:19:23.635 04:34:25 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:23.635 04:34:25 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:23.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:23.635 Waiting for block devices as requested 00:19:23.635 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:23.635 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:23.635 04:34:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:23.635 04:34:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:23.635 04:34:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.635 04:34:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:23.635 04:34:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.635 04:34:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:23.635 04:34:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.635 04:34:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:23.635 00:19:23.635 real 0m59.134s 00:19:23.635 user 3m47.275s 00:19:23.635 sys 0m19.102s 00:19:23.635 04:34:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:23.635 04:34:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.635 ************************************ 00:19:23.635 END TEST nvmf_dif 00:19:23.635 ************************************ 00:19:23.635 04:34:25 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:23.635 04:34:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:23.635 04:34:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.635 04:34:25 -- common/autotest_common.sh@10 -- # set +x 00:19:23.635 ************************************ 00:19:23.635 START TEST nvmf_abort_qd_sizes 00:19:23.635 ************************************ 00:19:23.635 04:34:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:23.635 * Looking for test storage... 
00:19:23.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:23.635 04:34:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:23.635 04:34:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:23.635 04:34:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:23.635 04:34:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:23.635 04:34:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:23.635 04:34:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:23.635 04:34:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:23.635 04:34:26 -- scripts/common.sh@335 -- # IFS=.-: 00:19:23.635 04:34:26 -- scripts/common.sh@335 -- # read -ra ver1 00:19:23.635 04:34:26 -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.635 04:34:26 -- scripts/common.sh@336 -- # read -ra ver2 00:19:23.635 04:34:26 -- scripts/common.sh@337 -- # local 'op=<' 00:19:23.635 04:34:26 -- scripts/common.sh@339 -- # ver1_l=2 00:19:23.635 04:34:26 -- scripts/common.sh@340 -- # ver2_l=1 00:19:23.635 04:34:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:23.635 04:34:26 -- scripts/common.sh@343 -- # case "$op" in 00:19:23.635 04:34:26 -- scripts/common.sh@344 -- # : 1 00:19:23.635 04:34:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:23.635 04:34:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:23.635 04:34:26 -- scripts/common.sh@364 -- # decimal 1 00:19:23.635 04:34:26 -- scripts/common.sh@352 -- # local d=1 00:19:23.635 04:34:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.635 04:34:26 -- scripts/common.sh@354 -- # echo 1 00:19:23.635 04:34:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:23.635 04:34:26 -- scripts/common.sh@365 -- # decimal 2 00:19:23.635 04:34:26 -- scripts/common.sh@352 -- # local d=2 00:19:23.635 04:34:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.635 04:34:26 -- scripts/common.sh@354 -- # echo 2 00:19:23.635 04:34:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:23.635 04:34:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:23.635 04:34:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:23.635 04:34:26 -- scripts/common.sh@367 -- # return 0 00:19:23.635 04:34:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.635 04:34:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:23.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.635 --rc genhtml_branch_coverage=1 00:19:23.635 --rc genhtml_function_coverage=1 00:19:23.635 --rc genhtml_legend=1 00:19:23.635 --rc geninfo_all_blocks=1 00:19:23.635 --rc geninfo_unexecuted_blocks=1 00:19:23.635 00:19:23.635 ' 00:19:23.635 04:34:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:23.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.635 --rc genhtml_branch_coverage=1 00:19:23.635 --rc genhtml_function_coverage=1 00:19:23.636 --rc genhtml_legend=1 00:19:23.636 --rc geninfo_all_blocks=1 00:19:23.636 --rc geninfo_unexecuted_blocks=1 00:19:23.636 00:19:23.636 ' 00:19:23.636 04:34:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:23.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.636 --rc genhtml_branch_coverage=1 00:19:23.636 --rc genhtml_function_coverage=1 00:19:23.636 --rc genhtml_legend=1 00:19:23.636 --rc geninfo_all_blocks=1 00:19:23.636 --rc geninfo_unexecuted_blocks=1 00:19:23.636 00:19:23.636 ' 00:19:23.636 
04:34:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:23.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.636 --rc genhtml_branch_coverage=1 00:19:23.636 --rc genhtml_function_coverage=1 00:19:23.636 --rc genhtml_legend=1 00:19:23.636 --rc geninfo_all_blocks=1 00:19:23.636 --rc geninfo_unexecuted_blocks=1 00:19:23.636 00:19:23.636 ' 00:19:23.636 04:34:26 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:23.636 04:34:26 -- nvmf/common.sh@7 -- # uname -s 00:19:23.636 04:34:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.636 04:34:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.636 04:34:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.636 04:34:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.636 04:34:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.636 04:34:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.636 04:34:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.636 04:34:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.636 04:34:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.636 04:34:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.636 04:34:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b 00:19:23.636 04:34:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=9be4eab6-f2ec-4821-ab95-f758750ade2b 00:19:23.636 04:34:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.636 04:34:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.636 04:34:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.636 04:34:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.636 04:34:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.636 04:34:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.636 04:34:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.636 04:34:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.636 04:34:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.636 04:34:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.636 04:34:26 -- paths/export.sh@5 -- # export PATH 00:19:23.636 04:34:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.636 04:34:26 -- nvmf/common.sh@46 -- # : 0 00:19:23.636 04:34:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:23.636 04:34:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:23.636 04:34:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:23.636 04:34:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.636 04:34:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.636 04:34:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:23.636 04:34:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:23.636 04:34:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:23.636 04:34:26 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:23.636 04:34:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:23.636 04:34:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.636 04:34:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:23.636 04:34:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:23.636 04:34:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:23.636 04:34:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.636 04:34:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:23.636 04:34:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.636 04:34:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:23.636 04:34:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:23.636 04:34:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:23.636 04:34:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:23.636 04:34:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:23.636 04:34:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:23.636 04:34:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.636 04:34:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.636 04:34:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:23.636 04:34:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:23.636 04:34:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.636 04:34:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.636 04:34:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.636 04:34:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.636 04:34:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.636 04:34:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.636 04:34:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.636 04:34:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.636 04:34:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:23.636 04:34:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:23.636 Cannot find device "nvmf_tgt_br" 00:19:23.636 04:34:26 -- nvmf/common.sh@154 -- # true 00:19:23.636 04:34:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.636 Cannot find device "nvmf_tgt_br2" 00:19:23.636 04:34:26 -- nvmf/common.sh@155 -- # true 
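The nvmf_veth_init steps around here first tear down anything left over from a previous run (hence the "Cannot find device" messages) and then build the small veth/bridge topology the abort runs connect over. Condensed from the ip/iptables commands logged below, the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target ends move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # the three *_br peer ends are enslaved to it
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # all links are then brought up and 10.0.0.1/.2/.3 are ping-checked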
00:19:23.636 04:34:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:23.636 04:34:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:23.636 Cannot find device "nvmf_tgt_br" 00:19:23.636 04:34:26 -- nvmf/common.sh@157 -- # true 00:19:23.636 04:34:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:23.636 Cannot find device "nvmf_tgt_br2" 00:19:23.636 04:34:26 -- nvmf/common.sh@158 -- # true 00:19:23.636 04:34:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:23.636 04:34:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:23.636 04:34:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:23.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.636 04:34:26 -- nvmf/common.sh@161 -- # true 00:19:23.636 04:34:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:23.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.636 04:34:26 -- nvmf/common.sh@162 -- # true 00:19:23.636 04:34:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:23.636 04:34:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:23.636 04:34:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:23.636 04:34:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:23.636 04:34:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:23.636 04:34:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:23.636 04:34:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:23.636 04:34:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:23.636 04:34:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:23.636 04:34:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:23.636 04:34:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:23.636 04:34:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:23.636 04:34:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:23.636 04:34:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.636 04:34:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.636 04:34:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.636 04:34:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:23.636 04:34:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:23.636 04:34:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.636 04:34:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.636 04:34:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:23.636 04:34:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.636 04:34:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.636 04:34:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:23.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:23.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:23.636 00:19:23.636 --- 10.0.0.2 ping statistics --- 00:19:23.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.636 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:23.636 04:34:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:23.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:23.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:23.636 00:19:23.636 --- 10.0.0.3 ping statistics --- 00:19:23.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.636 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:23.636 04:34:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:19:23.636 00:19:23.636 --- 10.0.0.1 ping statistics --- 00:19:23.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.637 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:19:23.637 04:34:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.637 04:34:26 -- nvmf/common.sh@421 -- # return 0 00:19:23.637 04:34:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:23.637 04:34:26 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:23.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:24.153 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:24.153 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:24.153 04:34:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.153 04:34:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:24.153 04:34:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:24.153 04:34:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.153 04:34:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:24.153 04:34:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:24.153 04:34:27 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:24.153 04:34:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:24.153 04:34:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.153 04:34:27 -- common/autotest_common.sh@10 -- # set +x 00:19:24.153 04:34:27 -- nvmf/common.sh@469 -- # nvmfpid=75840 00:19:24.153 04:34:27 -- nvmf/common.sh@470 -- # waitforlisten 75840 00:19:24.153 04:34:27 -- common/autotest_common.sh@829 -- # '[' -z 75840 ']' 00:19:24.153 04:34:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.153 04:34:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:24.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.153 04:34:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.153 04:34:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.153 04:34:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.153 04:34:27 -- common/autotest_common.sh@10 -- # set +x 00:19:24.153 [2024-12-07 04:34:27.333514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:24.153 [2024-12-07 04:34:27.333611] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.412 [2024-12-07 04:34:27.477450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.412 [2024-12-07 04:34:27.547110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:24.412 [2024-12-07 04:34:27.547265] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.412 [2024-12-07 04:34:27.547281] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.412 [2024-12-07 04:34:27.547292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.412 [2024-12-07 04:34:27.547470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.412 [2024-12-07 04:34:27.547720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.412 [2024-12-07 04:34:27.547775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.412 [2024-12-07 04:34:27.547785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.347 04:34:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.347 04:34:28 -- common/autotest_common.sh@862 -- # return 0 00:19:25.347 04:34:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:25.347 04:34:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.347 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.347 04:34:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.347 04:34:28 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:25.347 04:34:28 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:25.347 04:34:28 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:25.347 04:34:28 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:25.347 04:34:28 -- scripts/common.sh@312 -- # local nvmes 00:19:25.347 04:34:28 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:25.347 04:34:28 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:25.347 04:34:28 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:25.347 04:34:28 -- scripts/common.sh@297 -- # local bdf= 00:19:25.347 04:34:28 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:25.347 04:34:28 -- scripts/common.sh@232 -- # local class 00:19:25.347 04:34:28 -- scripts/common.sh@233 -- # local subclass 00:19:25.347 04:34:28 -- scripts/common.sh@234 -- # local progif 00:19:25.347 04:34:28 -- scripts/common.sh@235 -- # printf %02x 1 00:19:25.347 04:34:28 -- scripts/common.sh@235 -- # class=01 00:19:25.347 04:34:28 -- scripts/common.sh@236 -- # printf %02x 8 00:19:25.347 04:34:28 -- scripts/common.sh@236 -- # subclass=08 00:19:25.348 04:34:28 -- scripts/common.sh@237 -- # printf %02x 2 00:19:25.348 04:34:28 -- scripts/common.sh@237 -- # progif=02 00:19:25.348 04:34:28 -- scripts/common.sh@239 -- # hash lspci 00:19:25.348 04:34:28 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:25.348 04:34:28 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:25.348 04:34:28 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:25.348 04:34:28 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:25.348 04:34:28 -- scripts/common.sh@244 -- # tr -d '"' 00:19:25.348 04:34:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:25.348 04:34:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:25.348 04:34:28 -- scripts/common.sh@15 -- # local i 00:19:25.348 04:34:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:25.348 04:34:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:25.348 04:34:28 -- scripts/common.sh@24 -- # return 0 00:19:25.348 04:34:28 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:25.348 04:34:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:25.348 04:34:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:25.348 04:34:28 -- scripts/common.sh@15 -- # local i 00:19:25.348 04:34:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:25.348 04:34:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:25.348 04:34:28 -- scripts/common.sh@24 -- # return 0 00:19:25.348 04:34:28 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:25.348 04:34:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:25.348 04:34:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:25.348 04:34:28 -- scripts/common.sh@322 -- # uname -s 00:19:25.348 04:34:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:25.348 04:34:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:25.348 04:34:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:25.348 04:34:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:25.348 04:34:28 -- scripts/common.sh@322 -- # uname -s 00:19:25.348 04:34:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:25.348 04:34:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:25.348 04:34:28 -- scripts/common.sh@327 -- # (( 2 )) 00:19:25.348 04:34:28 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:25.348 04:34:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:25.348 04:34:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.348 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.348 ************************************ 00:19:25.348 START TEST spdk_target_abort 00:19:25.348 ************************************ 00:19:25.348 04:34:28 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:25.348 04:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.348 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.348 spdk_targetn1 00:19:25.348 04:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.348 04:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.348 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.348 [2024-12-07 
04:34:28.551756] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.348 04:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:25.348 04:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.348 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.348 04:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:25.348 04:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.348 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.348 04:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.348 04:34:28 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:25.348 04:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.348 04:34:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.348 [2024-12-07 04:34:28.579922] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.606 04:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:25.606 04:34:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:25.607 04:34:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:25.607 04:34:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:25.607 04:34:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:25.607 04:34:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:25.607 04:34:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:28.889 Initializing NVMe Controllers 00:19:28.889 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:28.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:28.889 Initialization complete. Launching workers. 00:19:28.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10276, failed: 0 00:19:28.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1040, failed to submit 9236 00:19:28.889 success 763, unsuccess 277, failed 0 00:19:28.889 04:34:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:28.889 04:34:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:32.192 Initializing NVMe Controllers 00:19:32.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:32.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:32.192 Initialization complete. Launching workers. 00:19:32.192 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8879, failed: 0 00:19:32.192 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1163, failed to submit 7716 00:19:32.192 success 356, unsuccess 807, failed 0 00:19:32.192 04:34:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:32.192 04:34:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:35.478 Initializing NVMe Controllers 00:19:35.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:35.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:35.478 Initialization complete. Launching workers. 
00:19:35.478 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31711, failed: 0 00:19:35.478 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2379, failed to submit 29332 00:19:35.478 success 477, unsuccess 1902, failed 0 00:19:35.478 04:34:38 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:35.478 04:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.478 04:34:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.478 04:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.478 04:34:38 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:35.478 04:34:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.478 04:34:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.478 04:34:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.478 04:34:38 -- target/abort_qd_sizes.sh@62 -- # killprocess 75840 00:19:35.478 04:34:38 -- common/autotest_common.sh@936 -- # '[' -z 75840 ']' 00:19:35.478 04:34:38 -- common/autotest_common.sh@940 -- # kill -0 75840 00:19:35.478 04:34:38 -- common/autotest_common.sh@941 -- # uname 00:19:35.478 04:34:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.478 04:34:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75840 00:19:35.478 killing process with pid 75840 00:19:35.478 04:34:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:35.478 04:34:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:35.478 04:34:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75840' 00:19:35.478 04:34:38 -- common/autotest_common.sh@955 -- # kill 75840 00:19:35.478 04:34:38 -- common/autotest_common.sh@960 -- # wait 75840 00:19:35.478 ************************************ 00:19:35.478 END TEST spdk_target_abort 00:19:35.478 ************************************ 00:19:35.478 00:19:35.478 real 0m10.193s 00:19:35.478 user 0m42.033s 00:19:35.478 sys 0m1.953s 00:19:35.478 04:34:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:35.478 04:34:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.478 04:34:38 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:35.478 04:34:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:35.478 04:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:35.478 04:34:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.478 ************************************ 00:19:35.478 START TEST kernel_target_abort 00:19:35.478 ************************************ 00:19:35.478 04:34:38 -- common/autotest_common.sh@1114 -- # kernel_target 00:19:35.478 04:34:38 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:35.478 04:34:38 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:35.478 04:34:38 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:35.478 04:34:38 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:35.478 04:34:38 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:35.478 04:34:38 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:35.478 04:34:38 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:35.478 04:34:38 -- nvmf/common.sh@627 -- # local block nvme 00:19:35.478 04:34:38 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:35.478 04:34:38 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:35.737 04:34:38 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:35.737 04:34:38 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:35.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:35.997 Waiting for block devices as requested 00:19:35.997 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:35.997 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.256 04:34:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:36.256 04:34:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:36.256 04:34:39 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:36.256 04:34:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:36.256 No valid GPT data, bailing 00:19:36.256 04:34:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:36.256 04:34:39 -- scripts/common.sh@393 -- # pt= 00:19:36.256 04:34:39 -- scripts/common.sh@394 -- # return 1 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:36.256 04:34:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:36.256 04:34:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:36.256 04:34:39 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:36.256 04:34:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:36.256 No valid GPT data, bailing 00:19:36.256 04:34:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:36.256 04:34:39 -- scripts/common.sh@393 -- # pt= 00:19:36.256 04:34:39 -- scripts/common.sh@394 -- # return 1 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:36.256 04:34:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:36.256 04:34:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:36.256 04:34:39 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:36.256 04:34:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:36.256 No valid GPT data, bailing 00:19:36.256 04:34:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:36.256 04:34:39 -- scripts/common.sh@393 -- # pt= 00:19:36.256 04:34:39 -- scripts/common.sh@394 -- # return 1 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:36.256 04:34:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:36.256 04:34:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:36.256 04:34:39 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:36.256 04:34:39 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:36.256 04:34:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:36.516 No valid GPT data, bailing 00:19:36.516 04:34:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:36.516 04:34:39 -- scripts/common.sh@393 -- # pt= 00:19:36.516 04:34:39 -- scripts/common.sh@394 -- # return 1 00:19:36.516 04:34:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:36.516 04:34:39 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:36.516 04:34:39 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:36.516 04:34:39 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:36.516 04:34:39 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:36.516 04:34:39 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:36.516 04:34:39 -- nvmf/common.sh@654 -- # echo 1 00:19:36.516 04:34:39 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:36.516 04:34:39 -- nvmf/common.sh@656 -- # echo 1 00:19:36.516 04:34:39 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:36.516 04:34:39 -- nvmf/common.sh@663 -- # echo tcp 00:19:36.516 04:34:39 -- nvmf/common.sh@664 -- # echo 4420 00:19:36.516 04:34:39 -- nvmf/common.sh@665 -- # echo ipv4 00:19:36.516 04:34:39 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:36.516 04:34:39 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9be4eab6-f2ec-4821-ab95-f758750ade2b --hostid=9be4eab6-f2ec-4821-ab95-f758750ade2b -a 10.0.0.1 -t tcp -s 4420 00:19:36.516 00:19:36.516 Discovery Log Number of Records 2, Generation counter 2 00:19:36.516 =====Discovery Log Entry 0====== 00:19:36.516 trtype: tcp 00:19:36.516 adrfam: ipv4 00:19:36.516 subtype: current discovery subsystem 00:19:36.516 treq: not specified, sq flow control disable supported 00:19:36.516 portid: 1 00:19:36.516 trsvcid: 4420 00:19:36.516 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:36.516 traddr: 10.0.0.1 00:19:36.516 eflags: none 00:19:36.516 sectype: none 00:19:36.516 =====Discovery Log Entry 1====== 00:19:36.516 trtype: tcp 00:19:36.516 adrfam: ipv4 00:19:36.516 subtype: nvme subsystem 00:19:36.516 treq: not specified, sq flow control disable supported 00:19:36.516 portid: 1 00:19:36.516 trsvcid: 4420 00:19:36.516 subnqn: kernel_target 00:19:36.516 traddr: 10.0.0.1 00:19:36.516 eflags: none 00:19:36.516 sectype: none 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
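The kernel-target pass above needs no SPDK process at all: the target is assembled purely through nvmet configfs, with the namespace backed by /dev/nvme1n3 (the last unused NVMe namespace the GPT scan settled on), and the nvme discover output above shows both the discovery subsystem and kernel_target listening on 10.0.0.1:4420. The echo redirection targets are hidden by xtrace, so the attribute file names in this sketch are the standard nvmet configfs ones the logged values most plausibly map to, not something the trace itself confirms:

    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir subsystems/kernel_target
    mkdir subsystems/kernel_target/namespaces/1
    mkdir ports/1
    echo SPDK-kernel_target > subsystems/kernel_target/attr_serial          # assumed attribute
    echo 1                  > subsystems/kernel_target/attr_allow_any_host  # assumed attribute
    echo /dev/nvme1n3       > subsystems/kernel_target/namespaces/1/device_path
    echo 1                  > subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1           > ports/1/addr_traddr
    echo tcp                > ports/1/addr_trtype
    echo 4420               > ports/1/addr_trsvcid
    echo ipv4               > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/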
00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:36.516 04:34:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:39.807 Initializing NVMe Controllers 00:19:39.807 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:39.807 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:39.807 Initialization complete. Launching workers. 00:19:39.807 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 33327, failed: 0 00:19:39.807 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33327, failed to submit 0 00:19:39.807 success 0, unsuccess 33327, failed 0 00:19:39.807 04:34:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:39.807 04:34:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:43.091 Initializing NVMe Controllers 00:19:43.091 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:43.091 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:43.091 Initialization complete. Launching workers. 00:19:43.091 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 67613, failed: 0 00:19:43.091 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28767, failed to submit 38846 00:19:43.091 success 0, unsuccess 28767, failed 0 00:19:43.091 04:34:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:43.091 04:34:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:46.374 Initializing NVMe Controllers 00:19:46.374 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:46.374 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:46.374 Initialization complete. Launching workers. 
00:19:46.374 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77062, failed: 0 00:19:46.374 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19206, failed to submit 57856 00:19:46.374 success 0, unsuccess 19206, failed 0 00:19:46.374 04:34:49 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:46.374 04:34:49 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:46.374 04:34:49 -- nvmf/common.sh@677 -- # echo 0 00:19:46.374 04:34:49 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:46.374 04:34:49 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:46.374 04:34:49 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:46.374 04:34:49 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:46.374 04:34:49 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:46.374 04:34:49 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:46.374 00:19:46.374 real 0m10.436s 00:19:46.374 user 0m5.710s 00:19:46.374 sys 0m2.240s 00:19:46.374 04:34:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:46.374 ************************************ 00:19:46.374 END TEST kernel_target_abort 00:19:46.374 ************************************ 00:19:46.374 04:34:49 -- common/autotest_common.sh@10 -- # set +x 00:19:46.374 04:34:49 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:46.374 04:34:49 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:46.374 04:34:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:46.374 04:34:49 -- nvmf/common.sh@116 -- # sync 00:19:46.374 04:34:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:46.374 04:34:49 -- nvmf/common.sh@119 -- # set +e 00:19:46.374 04:34:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:46.374 04:34:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:46.374 rmmod nvme_tcp 00:19:46.374 rmmod nvme_fabrics 00:19:46.374 rmmod nvme_keyring 00:19:46.374 04:34:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:46.374 04:34:49 -- nvmf/common.sh@123 -- # set -e 00:19:46.374 04:34:49 -- nvmf/common.sh@124 -- # return 0 00:19:46.374 04:34:49 -- nvmf/common.sh@477 -- # '[' -n 75840 ']' 00:19:46.374 04:34:49 -- nvmf/common.sh@478 -- # killprocess 75840 00:19:46.374 04:34:49 -- common/autotest_common.sh@936 -- # '[' -z 75840 ']' 00:19:46.374 04:34:49 -- common/autotest_common.sh@940 -- # kill -0 75840 00:19:46.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75840) - No such process 00:19:46.374 Process with pid 75840 is not found 00:19:46.374 04:34:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75840 is not found' 00:19:46.374 04:34:49 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:46.374 04:34:49 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:46.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.941 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:46.941 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:46.941 04:34:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:46.941 04:34:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:46.941 04:34:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.941 04:34:49 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:19:46.941 04:34:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.941 04:34:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:46.941 04:34:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.941 04:34:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:46.941 00:19:46.941 real 0m24.201s 00:19:46.941 user 0m49.201s 00:19:46.941 sys 0m5.513s 00:19:46.941 04:34:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:46.941 04:34:50 -- common/autotest_common.sh@10 -- # set +x 00:19:46.941 ************************************ 00:19:46.941 END TEST nvmf_abort_qd_sizes 00:19:46.941 ************************************ 00:19:46.941 04:34:50 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:46.941 04:34:50 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:19:46.941 04:34:50 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:19:46.941 04:34:50 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:19:46.941 04:34:50 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:19:46.941 04:34:50 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:19:46.941 04:34:50 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:19:46.941 04:34:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.941 04:34:50 -- common/autotest_common.sh@10 -- # set +x 00:19:46.941 04:34:50 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:19:46.941 04:34:50 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:19:46.941 04:34:50 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:19:46.941 04:34:50 -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 INFO: APP EXITING 00:19:48.838 INFO: killing all VMs 00:19:48.838 INFO: killing vhost app 00:19:48.838 INFO: EXIT DONE 00:19:49.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.413 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:49.413 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:49.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.979 Cleaning 00:19:49.979 Removing: /var/run/dpdk/spdk0/config 00:19:49.979 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:49.979 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:49.979 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:49.979 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:49.979 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:49.979 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:49.979 Removing: /var/run/dpdk/spdk1/config 00:19:49.979 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:49.979 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:49.979 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:49.979 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:49.979 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:49.979 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:49.979 Removing: /var/run/dpdk/spdk2/config 00:19:49.979 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:49.979 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:49.979 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:49.979 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:49.979 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:49.979 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:50.294 Removing: /var/run/dpdk/spdk3/config 00:19:50.294 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:50.294 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:50.294 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:50.294 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:50.294 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:50.294 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:50.294 Removing: /var/run/dpdk/spdk4/config 00:19:50.294 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:50.294 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:50.294 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:50.294 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:50.294 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:50.294 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:50.294 Removing: /dev/shm/nvmf_trace.0 00:19:50.294 Removing: /dev/shm/spdk_tgt_trace.pid53796 00:19:50.294 Removing: /var/run/dpdk/spdk0 00:19:50.294 Removing: /var/run/dpdk/spdk1 00:19:50.294 Removing: /var/run/dpdk/spdk2 00:19:50.294 Removing: /var/run/dpdk/spdk3 00:19:50.294 Removing: /var/run/dpdk/spdk4 00:19:50.294 Removing: /var/run/dpdk/spdk_pid53644 00:19:50.294 Removing: /var/run/dpdk/spdk_pid53796 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54049 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54234 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54387 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54458 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54536 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54634 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54718 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54751 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54781 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54855 00:19:50.294 Removing: /var/run/dpdk/spdk_pid54931 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55357 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55409 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55455 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55471 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55531 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55543 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55610 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55626 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55666 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55684 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55728 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55746 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55877 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55907 00:19:50.294 Removing: /var/run/dpdk/spdk_pid55994 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56040 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56070 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56123 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56137 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56177 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56191 
00:19:50.294 Removing: /var/run/dpdk/spdk_pid56226 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56245 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56274 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56294 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56328 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56342 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56377 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56396 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56425 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56445 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56479 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56493 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56528 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56547 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56582 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56596 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56630 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56650 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56679 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56698 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56733 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56747 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56781 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56801 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56830 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56846 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56884 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56898 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56937 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56955 00:19:50.294 Removing: /var/run/dpdk/spdk_pid56987 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57010 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57047 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57066 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57098 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57118 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57154 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57225 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57312 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57650 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57662 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57695 00:19:50.294 Removing: /var/run/dpdk/spdk_pid57707 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57721 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57739 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57757 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57765 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57783 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57801 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57809 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57827 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57845 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57857 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57871 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57889 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57898 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57915 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57933 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57952 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57977 00:19:50.568 Removing: /var/run/dpdk/spdk_pid57995 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58017 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58087 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58114 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58123 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58149 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58161 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58163 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58211 00:19:50.568 Removing: 
/var/run/dpdk/spdk_pid58217 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58249 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58251 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58259 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58266 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58274 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58281 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58283 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58295 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58317 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58344 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58353 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58382 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58391 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58399 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58439 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58451 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58472 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58479 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58487 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58494 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58502 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58509 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58517 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58519 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58600 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58642 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58748 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58774 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58818 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58838 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58847 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58867 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58897 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58911 00:19:50.568 Removing: /var/run/dpdk/spdk_pid58987 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59001 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59048 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59122 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59180 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59203 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59297 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59342 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59379 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59597 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59689 00:19:50.568 Removing: /var/run/dpdk/spdk_pid59722 00:19:50.568 Removing: /var/run/dpdk/spdk_pid60059 00:19:50.568 Removing: /var/run/dpdk/spdk_pid60097 00:19:50.568 Removing: /var/run/dpdk/spdk_pid60406 00:19:50.568 Removing: /var/run/dpdk/spdk_pid60819 00:19:50.568 Removing: /var/run/dpdk/spdk_pid61088 00:19:50.568 Removing: /var/run/dpdk/spdk_pid61857 00:19:50.568 Removing: /var/run/dpdk/spdk_pid62686 00:19:50.568 Removing: /var/run/dpdk/spdk_pid62809 00:19:50.568 Removing: /var/run/dpdk/spdk_pid62871 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64150 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64367 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64683 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64796 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64930 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64944 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64972 00:19:50.568 Removing: /var/run/dpdk/spdk_pid64999 00:19:50.568 Removing: /var/run/dpdk/spdk_pid65096 00:19:50.568 Removing: /var/run/dpdk/spdk_pid65231 00:19:50.568 Removing: /var/run/dpdk/spdk_pid65386 00:19:50.568 Removing: /var/run/dpdk/spdk_pid65461 00:19:50.568 Removing: /var/run/dpdk/spdk_pid65857 00:19:50.568 Removing: /var/run/dpdk/spdk_pid66204 
00:19:50.568 Removing: /var/run/dpdk/spdk_pid66212 00:19:50.568 Removing: /var/run/dpdk/spdk_pid68435 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68437 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68720 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68739 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68753 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68784 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68790 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68873 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68881 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68993 00:19:50.827 Removing: /var/run/dpdk/spdk_pid68996 00:19:50.827 Removing: /var/run/dpdk/spdk_pid69104 00:19:50.827 Removing: /var/run/dpdk/spdk_pid69106 00:19:50.827 Removing: /var/run/dpdk/spdk_pid69512 00:19:50.827 Removing: /var/run/dpdk/spdk_pid69556 00:19:50.827 Removing: /var/run/dpdk/spdk_pid69665 00:19:50.827 Removing: /var/run/dpdk/spdk_pid69744 00:19:50.827 Removing: /var/run/dpdk/spdk_pid70060 00:19:50.827 Removing: /var/run/dpdk/spdk_pid70249 00:19:50.827 Removing: /var/run/dpdk/spdk_pid70650 00:19:50.827 Removing: /var/run/dpdk/spdk_pid71182 00:19:50.827 Removing: /var/run/dpdk/spdk_pid71622 00:19:50.827 Removing: /var/run/dpdk/spdk_pid71679 00:19:50.827 Removing: /var/run/dpdk/spdk_pid71727 00:19:50.828 Removing: /var/run/dpdk/spdk_pid71783 00:19:50.828 Removing: /var/run/dpdk/spdk_pid71895 00:19:50.828 Removing: /var/run/dpdk/spdk_pid71951 00:19:50.828 Removing: /var/run/dpdk/spdk_pid72011 00:19:50.828 Removing: /var/run/dpdk/spdk_pid72067 00:19:50.828 Removing: /var/run/dpdk/spdk_pid72402 00:19:50.828 Removing: /var/run/dpdk/spdk_pid73587 00:19:50.828 Removing: /var/run/dpdk/spdk_pid73729 00:19:50.828 Removing: /var/run/dpdk/spdk_pid73977 00:19:50.828 Removing: /var/run/dpdk/spdk_pid74538 00:19:50.828 Removing: /var/run/dpdk/spdk_pid74696 00:19:50.828 Removing: /var/run/dpdk/spdk_pid74854 00:19:50.828 Removing: /var/run/dpdk/spdk_pid74951 00:19:50.828 Removing: /var/run/dpdk/spdk_pid75118 00:19:50.828 Removing: /var/run/dpdk/spdk_pid75228 00:19:50.828 Removing: /var/run/dpdk/spdk_pid75891 00:19:50.828 Removing: /var/run/dpdk/spdk_pid75926 00:19:50.828 Removing: /var/run/dpdk/spdk_pid75967 00:19:50.828 Removing: /var/run/dpdk/spdk_pid76210 00:19:50.828 Removing: /var/run/dpdk/spdk_pid76241 00:19:50.828 Removing: /var/run/dpdk/spdk_pid76276 00:19:50.828 Clean 00:19:50.828 killing process with pid 48032 00:19:50.828 killing process with pid 48035 00:19:50.828 04:34:54 -- common/autotest_common.sh@1446 -- # return 0 00:19:50.828 04:34:54 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:19:50.828 04:34:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.828 04:34:54 -- common/autotest_common.sh@10 -- # set +x 00:19:51.086 04:34:54 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:19:51.086 04:34:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.086 04:34:54 -- common/autotest_common.sh@10 -- # set +x 00:19:51.086 04:34:54 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:51.086 04:34:54 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:51.086 04:34:54 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:51.086 04:34:54 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:19:51.086 04:34:54 -- spdk/autotest.sh@383 -- # hostname 00:19:51.086 04:34:54 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:51.345 geninfo: WARNING: invalid characters removed from testname! 00:20:17.880 04:35:17 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:17.880 04:35:20 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.415 04:35:23 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.947 04:35:25 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:25.477 04:35:28 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.008 04:35:30 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:30.547 04:35:33 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:30.547 04:35:33 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:20:30.547 04:35:33 -- common/autotest_common.sh@1690 -- $ lcov --version 00:20:30.547 04:35:33 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:20:30.547 04:35:33 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:20:30.547 04:35:33 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:20:30.547 04:35:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:20:30.547 04:35:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:20:30.547 04:35:33 -- scripts/common.sh@335 -- $ IFS=.-: 00:20:30.547 04:35:33 -- scripts/common.sh@335 -- $ read -ra ver1 00:20:30.547 04:35:33 -- scripts/common.sh@336 -- $ IFS=.-: 
00:20:30.547 04:35:33 -- scripts/common.sh@336 -- $ read -ra ver2 00:20:30.547 04:35:33 -- scripts/common.sh@337 -- $ local 'op=<' 00:20:30.547 04:35:33 -- scripts/common.sh@339 -- $ ver1_l=2 00:20:30.547 04:35:33 -- scripts/common.sh@340 -- $ ver2_l=1 00:20:30.547 04:35:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:20:30.547 04:35:33 -- scripts/common.sh@343 -- $ case "$op" in 00:20:30.547 04:35:33 -- scripts/common.sh@344 -- $ : 1 00:20:30.547 04:35:33 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:20:30.547 04:35:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.547 04:35:33 -- scripts/common.sh@364 -- $ decimal 1 00:20:30.547 04:35:33 -- scripts/common.sh@352 -- $ local d=1 00:20:30.547 04:35:33 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:30.547 04:35:33 -- scripts/common.sh@354 -- $ echo 1 00:20:30.547 04:35:33 -- scripts/common.sh@364 -- $ ver1[v]=1 00:20:30.547 04:35:33 -- scripts/common.sh@365 -- $ decimal 2 00:20:30.547 04:35:33 -- scripts/common.sh@352 -- $ local d=2 00:20:30.547 04:35:33 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:30.547 04:35:33 -- scripts/common.sh@354 -- $ echo 2 00:20:30.547 04:35:33 -- scripts/common.sh@365 -- $ ver2[v]=2 00:20:30.547 04:35:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:20:30.547 04:35:33 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:20:30.547 04:35:33 -- scripts/common.sh@367 -- $ return 0 00:20:30.547 04:35:33 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.547 04:35:33 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:20:30.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.547 --rc genhtml_branch_coverage=1 00:20:30.547 --rc genhtml_function_coverage=1 00:20:30.547 --rc genhtml_legend=1 00:20:30.547 --rc geninfo_all_blocks=1 00:20:30.547 --rc geninfo_unexecuted_blocks=1 00:20:30.547 00:20:30.547 ' 00:20:30.547 04:35:33 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:20:30.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.547 --rc genhtml_branch_coverage=1 00:20:30.547 --rc genhtml_function_coverage=1 00:20:30.547 --rc genhtml_legend=1 00:20:30.547 --rc geninfo_all_blocks=1 00:20:30.547 --rc geninfo_unexecuted_blocks=1 00:20:30.547 00:20:30.547 ' 00:20:30.547 04:35:33 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:20:30.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.547 --rc genhtml_branch_coverage=1 00:20:30.547 --rc genhtml_function_coverage=1 00:20:30.547 --rc genhtml_legend=1 00:20:30.547 --rc geninfo_all_blocks=1 00:20:30.547 --rc geninfo_unexecuted_blocks=1 00:20:30.547 00:20:30.547 ' 00:20:30.547 04:35:33 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:20:30.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.547 --rc genhtml_branch_coverage=1 00:20:30.547 --rc genhtml_function_coverage=1 00:20:30.547 --rc genhtml_legend=1 00:20:30.547 --rc geninfo_all_blocks=1 00:20:30.547 --rc geninfo_unexecuted_blocks=1 00:20:30.547 00:20:30.547 ' 00:20:30.547 04:35:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:30.547 04:35:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:30.547 04:35:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.547 04:35:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.547 04:35:33 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.547 04:35:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.547 04:35:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.547 04:35:33 -- paths/export.sh@5 -- $ export PATH 00:20:30.547 04:35:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.547 04:35:33 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:30.547 04:35:33 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:30.547 04:35:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733546133.XXXXXX 00:20:30.547 04:35:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733546133.abM9ws 00:20:30.547 04:35:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:30.547 04:35:33 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:20:30.547 04:35:33 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:30.547 04:35:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:30.547 04:35:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:30.547 04:35:33 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:30.547 04:35:33 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:20:30.547 04:35:33 -- common/autotest_common.sh@10 -- $ set +x 00:20:30.547 04:35:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:20:30.547 04:35:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:30.547 04:35:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:30.547 04:35:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:30.547 04:35:33 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 
]] 00:20:30.547 04:35:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:30.547 04:35:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:30.547 04:35:33 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:30.547 04:35:33 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:30.547 04:35:33 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:30.547 04:35:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:30.547 + [[ -n 5231 ]] 00:20:30.547 + sudo kill 5231 00:20:30.557 [Pipeline] } 00:20:30.575 [Pipeline] // timeout 00:20:30.582 [Pipeline] } 00:20:30.597 [Pipeline] // stage 00:20:30.603 [Pipeline] } 00:20:30.617 [Pipeline] // catchError 00:20:30.626 [Pipeline] stage 00:20:30.628 [Pipeline] { (Stop VM) 00:20:30.640 [Pipeline] sh 00:20:30.932 + vagrant halt 00:20:34.273 ==> default: Halting domain... 00:20:40.851 [Pipeline] sh 00:20:41.131 + vagrant destroy -f 00:20:44.417 ==> default: Removing domain... 00:20:44.431 [Pipeline] sh 00:20:44.715 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:44.724 [Pipeline] } 00:20:44.744 [Pipeline] // stage 00:20:44.751 [Pipeline] } 00:20:44.770 [Pipeline] // dir 00:20:44.776 [Pipeline] } 00:20:44.795 [Pipeline] // wrap 00:20:44.802 [Pipeline] } 00:20:44.819 [Pipeline] // catchError 00:20:44.830 [Pipeline] stage 00:20:44.833 [Pipeline] { (Epilogue) 00:20:44.848 [Pipeline] sh 00:20:45.140 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:50.456 [Pipeline] catchError 00:20:50.458 [Pipeline] { 00:20:50.475 [Pipeline] sh 00:20:50.762 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:50.763 Artifacts sizes are good 00:20:50.772 [Pipeline] } 00:20:50.791 [Pipeline] // catchError 00:20:50.804 [Pipeline] archiveArtifacts 00:20:50.812 Archiving artifacts 00:20:50.932 [Pipeline] cleanWs 00:20:50.945 [WS-CLEANUP] Deleting project workspace... 00:20:50.945 [WS-CLEANUP] Deferred wipeout is used... 00:20:50.952 [WS-CLEANUP] done 00:20:50.954 [Pipeline] } 00:20:50.973 [Pipeline] // stage 00:20:50.978 [Pipeline] } 00:20:50.994 [Pipeline] // node 00:20:50.999 [Pipeline] End of Pipeline 00:20:51.035 Finished: SUCCESS
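
Two of the steps buried in the xtrace noise at the tail of this log are easier to follow when restated plainly. The first is the coverage post-processing: the per-test lcov capture is merged with the baseline taken before the tests, and the combined tracefile is then filtered so DPDK, system and example/tool sources do not count toward SPDK coverage. The sketch below restates those commands; the flags, paths and filter patterns are taken from the log, while packaging them as a standalone script (and the $(hostname) test label) is illustrative framing, not part of the SPDK tooling itself.

#!/usr/bin/env bash
# Sketch of the lcov post-processing performed above. Flags, paths and
# filter patterns are copied from the log; the wrapper script itself is
# only illustrative.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
OUT_DIR=$SPDK_DIR/../output
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q)

# Capture coverage for this run, tagged with the VM image name (hostname).
lcov "${LCOV_OPTS[@]}" -c --no-external -d "$SPDK_DIR" \
     -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov "${LCOV_OPTS[@]}" -a "$OUT_DIR/cov_base.info" \
     -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"

# Drop sources that should not count toward SPDK coverage. (On the
# '/usr/*' pass the log additionally supplies --ignore-errors unused so
# an unmatched pattern is not treated as fatal.)
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${LCOV_OPTS[@]}" -r "$OUT_DIR/cov_total.info" "$pattern" \
         -o "$OUT_DIR/cov_total.info"
done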
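
The second is the shell-only version check traced from scripts/common.sh: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares them field by field, which is how this run decides that the installed lcov 1.15 predates 2.0 and therefore keeps the 1.x-style --rc lcov_branch_coverage / lcov_function_coverage option names. A condensed sketch of the same idea follows; the helper name and the numeric-fields-only simplification are mine, not SPDK's.

# Sketch of the field-by-field version comparison traced above from
# scripts/common.sh. Assumes purely numeric fields (SPDK's real helper
# also validates each field); the function name is illustrative only.
ver_lt() {                        # ver_lt A B  ->  exit 0 iff A < B
    local IFS=.-:                 # split fields on '.', '-' and ':'
    local -a a=($1) b=($2)
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                      # equal versions are not "less than"
}

# e.g. keep the lcov 1.x option names only when lcov is older than 2.0:
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi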