00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2427 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3688 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.083 The recommended git tool is: git 00:00:00.083 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.122 Using shallow fetch with depth 1 00:00:00.122 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.122 > git --version # timeout=10 00:00:00.135 > git --version # 'git version 2.39.2' 00:00:00.135 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.149 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.149 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.692 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.703 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.714 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.714 > git config core.sparsecheckout # timeout=10 00:00:02.727 > git read-tree -mu HEAD # timeout=10 00:00:02.742 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.763 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.763 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.985 [Pipeline] Start of Pipeline 00:00:03.000 [Pipeline] library 00:00:03.002 Loading library shm_lib@master 00:00:03.003 Library shm_lib@master is cached. Copying from home. 00:00:03.022 [Pipeline] node 00:00:03.036 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.038 [Pipeline] { 00:00:03.050 [Pipeline] catchError 00:00:03.053 [Pipeline] { 00:00:03.066 [Pipeline] wrap 00:00:03.073 [Pipeline] { 00:00:03.081 [Pipeline] stage 00:00:03.083 [Pipeline] { (Prologue) 00:00:03.102 [Pipeline] echo 00:00:03.103 Node: VM-host-SM9 00:00:03.110 [Pipeline] cleanWs 00:00:03.120 [WS-CLEANUP] Deleting project workspace... 00:00:03.120 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.126 [WS-CLEANUP] done 00:00:03.316 [Pipeline] setCustomBuildProperty 00:00:03.411 [Pipeline] httpRequest 00:00:03.734 [Pipeline] echo 00:00:03.736 Sorcerer 10.211.164.20 is alive 00:00:03.746 [Pipeline] retry 00:00:03.748 [Pipeline] { 00:00:03.762 [Pipeline] httpRequest 00:00:03.767 HttpMethod: GET 00:00:03.767 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.768 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.768 Response Code: HTTP/1.1 200 OK 00:00:03.769 Success: Status code 200 is in the accepted range: 200,404 00:00:03.769 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.917 [Pipeline] } 00:00:03.933 [Pipeline] // retry 00:00:03.939 [Pipeline] sh 00:00:04.216 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.232 [Pipeline] httpRequest 00:00:04.544 [Pipeline] echo 00:00:04.546 Sorcerer 10.211.164.20 is alive 00:00:04.555 [Pipeline] retry 00:00:04.557 [Pipeline] { 00:00:04.574 [Pipeline] httpRequest 00:00:04.579 HttpMethod: GET 00:00:04.579 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:04.580 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:04.581 Response Code: HTTP/1.1 200 OK 00:00:04.582 Success: Status code 200 is in the accepted range: 200,404 00:00:04.582 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:15.672 [Pipeline] } 00:00:15.690 [Pipeline] // retry 00:00:15.699 [Pipeline] sh 00:00:15.979 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:18.523 [Pipeline] sh 00:00:18.804 + git -C spdk log --oneline -n5 00:00:18.804 c13c99a5e test: Various fixes for Fedora40 00:00:18.804 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:18.804 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:18.804 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:18.804 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:18.824 [Pipeline] writeFile 00:00:18.838 [Pipeline] sh 00:00:19.120 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:19.134 [Pipeline] sh 00:00:19.414 + cat autorun-spdk.conf 00:00:19.414 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:19.414 SPDK_TEST_NVMF=1 00:00:19.414 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:19.414 SPDK_TEST_URING=1 00:00:19.414 SPDK_TEST_VFIOUSER=1 00:00:19.414 SPDK_TEST_USDT=1 00:00:19.414 SPDK_RUN_UBSAN=1 00:00:19.414 NET_TYPE=virt 00:00:19.414 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:19.422 RUN_NIGHTLY=1 00:00:19.424 [Pipeline] } 00:00:19.438 [Pipeline] // stage 00:00:19.454 [Pipeline] stage 00:00:19.456 [Pipeline] { (Run VM) 00:00:19.469 [Pipeline] sh 00:00:19.750 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:19.750 + echo 'Start stage prepare_nvme.sh' 00:00:19.750 Start stage prepare_nvme.sh 00:00:19.750 + [[ -n 3 ]] 00:00:19.750 + disk_prefix=ex3 00:00:19.750 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:19.750 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:19.750 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:19.750 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:19.750 ++ SPDK_TEST_NVMF=1 00:00:19.750 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:19.750 ++ SPDK_TEST_URING=1 00:00:19.750 ++ SPDK_TEST_VFIOUSER=1 00:00:19.750 ++ SPDK_TEST_USDT=1 00:00:19.750 ++ SPDK_RUN_UBSAN=1 00:00:19.750 ++ NET_TYPE=virt 00:00:19.750 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:19.750 ++ RUN_NIGHTLY=1 00:00:19.750 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:19.750 + nvme_files=() 00:00:19.750 + declare -A nvme_files 00:00:19.750 + backend_dir=/var/lib/libvirt/images/backends 00:00:19.750 + nvme_files['nvme.img']=5G 00:00:19.750 + nvme_files['nvme-cmb.img']=5G 00:00:19.750 + nvme_files['nvme-multi0.img']=4G 00:00:19.750 + nvme_files['nvme-multi1.img']=4G 00:00:19.750 + nvme_files['nvme-multi2.img']=4G 00:00:19.750 + nvme_files['nvme-openstack.img']=8G 00:00:19.750 + nvme_files['nvme-zns.img']=5G 00:00:19.750 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:19.750 + (( SPDK_TEST_FTL == 1 )) 00:00:19.750 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:19.750 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:19.750 + for nvme in "${!nvme_files[@]}" 00:00:19.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:19.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:19.750 + for nvme in "${!nvme_files[@]}" 00:00:19.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:19.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:19.750 + for nvme in "${!nvme_files[@]}" 00:00:19.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:19.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:19.750 + for nvme in "${!nvme_files[@]}" 00:00:19.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:19.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:19.750 + for nvme in "${!nvme_files[@]}" 00:00:19.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:19.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:19.750 + for nvme in "${!nvme_files[@]}" 00:00:19.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:20.007 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:20.007 + for nvme in "${!nvme_files[@]}" 00:00:20.007 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:20.007 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:20.007 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:20.007 + echo 'End stage prepare_nvme.sh' 00:00:20.007 End stage prepare_nvme.sh 00:00:20.018 [Pipeline] sh 00:00:20.298 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:20.298 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img 
-b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:20.556 00:00:20.556 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:20.556 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:20.556 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:20.556 HELP=0 00:00:20.556 DRY_RUN=0 00:00:20.556 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:20.556 NVME_DISKS_TYPE=nvme,nvme, 00:00:20.556 NVME_AUTO_CREATE=0 00:00:20.556 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:20.556 NVME_CMB=,, 00:00:20.556 NVME_PMR=,, 00:00:20.556 NVME_ZNS=,, 00:00:20.556 NVME_MS=,, 00:00:20.556 NVME_FDP=,, 00:00:20.556 SPDK_VAGRANT_DISTRO=fedora39 00:00:20.556 SPDK_VAGRANT_VMCPU=10 00:00:20.556 SPDK_VAGRANT_VMRAM=12288 00:00:20.556 SPDK_VAGRANT_PROVIDER=libvirt 00:00:20.556 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:20.556 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:20.556 SPDK_OPENSTACK_NETWORK=0 00:00:20.556 VAGRANT_PACKAGE_BOX=0 00:00:20.556 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:20.556 FORCE_DISTRO=true 00:00:20.556 VAGRANT_BOX_VERSION= 00:00:20.556 EXTRA_VAGRANTFILES= 00:00:20.556 NIC_MODEL=e1000 00:00:20.556 00:00:20.556 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:20.556 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:23.878 Bringing machine 'default' up with 'libvirt' provider... 00:00:23.878 ==> default: Creating image (snapshot of base box volume). 00:00:24.137 ==> default: Creating domain with the following settings... 
00:00:24.137 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733124469_00d450b9a9d37b7b62d6 00:00:24.137 ==> default: -- Domain type: kvm 00:00:24.137 ==> default: -- Cpus: 10 00:00:24.137 ==> default: -- Feature: acpi 00:00:24.137 ==> default: -- Feature: apic 00:00:24.137 ==> default: -- Feature: pae 00:00:24.137 ==> default: -- Memory: 12288M 00:00:24.137 ==> default: -- Memory Backing: hugepages: 00:00:24.137 ==> default: -- Management MAC: 00:00:24.137 ==> default: -- Loader: 00:00:24.137 ==> default: -- Nvram: 00:00:24.137 ==> default: -- Base box: spdk/fedora39 00:00:24.137 ==> default: -- Storage pool: default 00:00:24.137 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733124469_00d450b9a9d37b7b62d6.img (20G) 00:00:24.137 ==> default: -- Volume Cache: default 00:00:24.137 ==> default: -- Kernel: 00:00:24.137 ==> default: -- Initrd: 00:00:24.137 ==> default: -- Graphics Type: vnc 00:00:24.137 ==> default: -- Graphics Port: -1 00:00:24.137 ==> default: -- Graphics IP: 127.0.0.1 00:00:24.137 ==> default: -- Graphics Password: Not defined 00:00:24.138 ==> default: -- Video Type: cirrus 00:00:24.138 ==> default: -- Video VRAM: 9216 00:00:24.138 ==> default: -- Sound Type: 00:00:24.138 ==> default: -- Keymap: en-us 00:00:24.138 ==> default: -- TPM Path: 00:00:24.138 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:24.138 ==> default: -- Command line args: 00:00:24.138 ==> default: -> value=-device, 00:00:24.138 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:24.138 ==> default: -> value=-drive, 00:00:24.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:24.138 ==> default: -> value=-device, 00:00:24.138 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.138 ==> default: -> value=-device, 00:00:24.138 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:24.138 ==> default: -> value=-drive, 00:00:24.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:24.138 ==> default: -> value=-device, 00:00:24.138 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.138 ==> default: -> value=-drive, 00:00:24.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:24.138 ==> default: -> value=-device, 00:00:24.138 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.138 ==> default: -> value=-drive, 00:00:24.138 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:24.138 ==> default: -> value=-device, 00:00:24.138 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:24.138 ==> default: Creating shared folders metadata... 00:00:24.138 ==> default: Starting domain. 00:00:25.521 ==> default: Waiting for domain to get an IP address... 00:00:40.412 ==> default: Waiting for SSH to become available... 00:00:41.790 ==> default: Configuring and enabling network interfaces... 
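The domain settings above hand each raw backing image to QEMU as an emulated NVMe namespace: controller nvme-0 exposes ex3-nvme.img as a single namespace, while controller nvme-1 exposes the three ex3-nvme-multi*.img files as nsid 1-3. A minimal standalone sketch of that second controller, assuming a QEMU binary new enough to provide the nvme-ns device (5.2 or later) and using the same, purely illustrative, image paths:

    # Sketch only: one emulated NVMe controller with three namespaces,
    # each backed by a raw image file (mirrors the nvme-1 arguments above).
    qemu-system-x86_64 \
      -machine q35,accel=kvm -m 2048 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme,id=nvme-1,serial=12341 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest this appears as a single controller with three namespaces (nvme1n1..nvme1n3), which matches the block-device listing later in this log.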
00:00:45.986 default: SSH address: 192.168.121.198:22 00:00:45.986 default: SSH username: vagrant 00:00:45.986 default: SSH auth method: private key 00:00:48.570 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:00:56.701 ==> default: Mounting SSHFS shared folder... 00:00:57.637 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:00:57.637 ==> default: Checking Mount.. 00:00:59.018 ==> default: Folder Successfully Mounted! 00:00:59.018 ==> default: Running provisioner: file... 00:00:59.955 default: ~/.gitconfig => .gitconfig 00:01:00.521 00:01:00.521 SUCCESS! 00:01:00.521 00:01:00.521 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:00.521 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:00.521 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:00.521 00:01:00.530 [Pipeline] } 00:01:00.543 [Pipeline] // stage 00:01:00.551 [Pipeline] dir 00:01:00.551 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:00.553 [Pipeline] { 00:01:00.563 [Pipeline] catchError 00:01:00.564 [Pipeline] { 00:01:00.575 [Pipeline] sh 00:01:00.854 + vagrant ssh-config --host vagrant 00:01:00.854 + sed -ne /^Host/,$p 00:01:00.854 + tee ssh_conf 00:01:04.146 Host vagrant 00:01:04.146 HostName 192.168.121.198 00:01:04.146 User vagrant 00:01:04.146 Port 22 00:01:04.147 UserKnownHostsFile /dev/null 00:01:04.147 StrictHostKeyChecking no 00:01:04.147 PasswordAuthentication no 00:01:04.147 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:04.147 IdentitiesOnly yes 00:01:04.147 LogLevel FATAL 00:01:04.147 ForwardAgent yes 00:01:04.147 ForwardX11 yes 00:01:04.147 00:01:04.161 [Pipeline] withEnv 00:01:04.163 [Pipeline] { 00:01:04.177 [Pipeline] sh 00:01:04.459 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:04.459 source /etc/os-release 00:01:04.459 [[ -e /image.version ]] && img=$(< /image.version) 00:01:04.459 # Minimal, systemd-like check. 00:01:04.459 if [[ -e /.dockerenv ]]; then 00:01:04.459 # Clear garbage from the node's name: 00:01:04.459 # agt-er_autotest_547-896 -> autotest_547-896 00:01:04.459 # $HOSTNAME is the actual container id 00:01:04.459 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:04.459 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:04.459 # We can assume this is a mount from a host where container is running, 00:01:04.459 # so fetch its hostname to easily identify the target swarm worker. 
00:01:04.459 container="$(< /etc/hostname) ($agent)" 00:01:04.459 else 00:01:04.459 # Fallback 00:01:04.459 container=$agent 00:01:04.459 fi 00:01:04.459 fi 00:01:04.459 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:04.459 00:01:04.732 [Pipeline] } 00:01:04.748 [Pipeline] // withEnv 00:01:04.758 [Pipeline] setCustomBuildProperty 00:01:04.774 [Pipeline] stage 00:01:04.777 [Pipeline] { (Tests) 00:01:04.795 [Pipeline] sh 00:01:05.078 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:05.352 [Pipeline] sh 00:01:05.634 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:05.910 [Pipeline] timeout 00:01:05.910 Timeout set to expire in 1 hr 0 min 00:01:05.912 [Pipeline] { 00:01:05.926 [Pipeline] sh 00:01:06.208 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:06.777 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:06.789 [Pipeline] sh 00:01:07.070 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:07.344 [Pipeline] sh 00:01:07.625 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:07.900 [Pipeline] sh 00:01:08.180 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:08.441 ++ readlink -f spdk_repo 00:01:08.441 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:08.441 + [[ -n /home/vagrant/spdk_repo ]] 00:01:08.441 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:08.441 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:08.441 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:08.441 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:08.441 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:08.441 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:08.441 + cd /home/vagrant/spdk_repo 00:01:08.441 + source /etc/os-release 00:01:08.441 ++ NAME='Fedora Linux' 00:01:08.441 ++ VERSION='39 (Cloud Edition)' 00:01:08.441 ++ ID=fedora 00:01:08.441 ++ VERSION_ID=39 00:01:08.441 ++ VERSION_CODENAME= 00:01:08.441 ++ PLATFORM_ID=platform:f39 00:01:08.441 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:08.441 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:08.441 ++ LOGO=fedora-logo-icon 00:01:08.441 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:08.441 ++ HOME_URL=https://fedoraproject.org/ 00:01:08.441 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:08.441 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:08.441 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:08.441 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:08.441 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:08.441 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:08.441 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:08.441 ++ SUPPORT_END=2024-11-12 00:01:08.441 ++ VARIANT='Cloud Edition' 00:01:08.441 ++ VARIANT_ID=cloud 00:01:08.441 + uname -a 00:01:08.441 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:08.441 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:08.441 Hugepages 00:01:08.441 node hugesize free / total 00:01:08.441 node0 1048576kB 0 / 0 00:01:08.441 node0 2048kB 0 / 0 00:01:08.441 00:01:08.441 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:08.441 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:08.441 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:08.441 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:08.701 + rm -f /tmp/spdk-ld-path 00:01:08.701 + source autorun-spdk.conf 00:01:08.701 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.701 ++ SPDK_TEST_NVMF=1 00:01:08.701 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.701 ++ SPDK_TEST_URING=1 00:01:08.701 ++ SPDK_TEST_VFIOUSER=1 00:01:08.701 ++ SPDK_TEST_USDT=1 00:01:08.701 ++ SPDK_RUN_UBSAN=1 00:01:08.701 ++ NET_TYPE=virt 00:01:08.701 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.701 ++ RUN_NIGHTLY=1 00:01:08.701 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:08.701 + [[ -n '' ]] 00:01:08.701 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:08.701 + for M in /var/spdk/build-*-manifest.txt 00:01:08.701 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:08.701 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:08.701 + for M in /var/spdk/build-*-manifest.txt 00:01:08.701 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:08.701 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:08.701 + for M in /var/spdk/build-*-manifest.txt 00:01:08.701 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:08.701 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:08.701 ++ uname 00:01:08.701 + [[ Linux == \L\i\n\u\x ]] 00:01:08.701 + sudo dmesg -T 00:01:08.701 + sudo dmesg --clear 00:01:08.701 + dmesg_pid=5244 00:01:08.701 + [[ Fedora Linux == FreeBSD ]] 00:01:08.701 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:08.701 + sudo dmesg -Tw 00:01:08.701 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:08.701 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
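The setup.sh status table above lists the per-node hugepage pools and the PCI devices the test run will use: a virtio disk (vda) plus the two emulated NVMe controllers (1b36:0010), both still bound to the kernel nvme driver at this point. A minimal sketch of reading the same information by hand, assuming a standard Linux sysfs layout and the BDFs shown above:

    # Hugepage pool on node0 (total and free), matching the table above
    grep -H . /sys/devices/system/node/node0/hugepages/hugepages-2048kB/{nr_hugepages,free_hugepages}
    # Which kernel driver currently owns an NVMe controller
    readlink /sys/bus/pci/devices/0000:00:06.0/driver
    # Vendor:device IDs for the emulated controller (1b36:0010 is QEMU's NVMe device)
    lspci -nn -s 00:06.0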
00:01:08.701 + [[ -x /usr/src/fio-static/fio ]] 00:01:08.701 + export FIO_BIN=/usr/src/fio-static/fio 00:01:08.701 + FIO_BIN=/usr/src/fio-static/fio 00:01:08.701 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:08.701 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:08.701 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:08.701 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:08.701 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:08.701 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:08.701 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:08.701 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:08.701 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:08.701 Test configuration: 00:01:08.701 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.701 SPDK_TEST_NVMF=1 00:01:08.701 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.701 SPDK_TEST_URING=1 00:01:08.701 SPDK_TEST_VFIOUSER=1 00:01:08.701 SPDK_TEST_USDT=1 00:01:08.701 SPDK_RUN_UBSAN=1 00:01:08.701 NET_TYPE=virt 00:01:08.701 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.701 RUN_NIGHTLY=1 07:28:34 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:08.701 07:28:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:08.701 07:28:34 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:08.701 07:28:34 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:08.701 07:28:34 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:08.701 07:28:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.701 07:28:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.701 07:28:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.701 07:28:34 -- paths/export.sh@5 -- $ export PATH 00:01:08.701 07:28:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:08.701 07:28:34 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:08.701 07:28:34 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:08.701 07:28:34 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733124514.XXXXXX 00:01:08.701 07:28:34 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733124514.fUArIH 00:01:08.701 07:28:34 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:08.701 07:28:34 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:08.701 07:28:34 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:08.701 07:28:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:08.701 07:28:34 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:08.701 07:28:34 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:08.701 07:28:34 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:08.701 07:28:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.961 07:28:34 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:08.961 07:28:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:08.961 07:28:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:08.961 07:28:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:08.961 07:28:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:08.961 Mon Dec 2 07:28:34 AM UTC 2024 00:01:08.961 07:28:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:08.961 LTS-67-gc13c99a5e 00:01:08.961 07:28:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:08.961 07:28:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:08.961 07:28:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:08.961 07:28:34 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:08.961 07:28:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:08.961 07:28:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.961 ************************************ 00:01:08.961 START TEST ubsan 00:01:08.961 ************************************ 00:01:08.961 using ubsan 00:01:08.961 07:28:34 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:08.961 00:01:08.961 real 0m0.001s 00:01:08.961 user 0m0.000s 00:01:08.961 sys 0m0.000s 00:01:08.961 07:28:34 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:08.961 07:28:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:08.961 ************************************ 00:01:08.961 END TEST ubsan 00:01:08.961 ************************************ 00:01:08.961 07:28:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:08.961 07:28:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:08.961 07:28:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:08.961 07:28:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:08.961 07:28:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:08.961 07:28:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:08.961 07:28:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:08.961 07:28:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:08.961 07:28:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:09.221 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:09.221 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:09.480 Using 'verbs' RDMA provider 00:01:25.336 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:37.548 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:37.548 Creating mk/config.mk...done. 00:01:37.548 Creating mk/cc.flags.mk...done. 00:01:37.548 Type 'make' to build. 00:01:37.548 07:29:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:37.548 07:29:02 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:37.548 07:29:02 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:37.548 07:29:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.548 ************************************ 00:01:37.548 START TEST make 00:01:37.548 ************************************ 00:01:37.548 07:29:02 -- common/autotest_common.sh@1114 -- $ make -j10 00:01:37.548 make[1]: Nothing to be done for 'all'. 00:01:38.114 The Meson build system 00:01:38.114 Version: 1.5.0 00:01:38.114 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:01:38.114 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:38.114 Build type: native build 00:01:38.114 Project name: libvfio-user 00:01:38.114 Project version: 0.0.1 00:01:38.114 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:38.114 C linker for the host machine: cc ld.bfd 2.40-14 00:01:38.114 Host machine cpu family: x86_64 00:01:38.114 Host machine cpu: x86_64 00:01:38.114 Run-time dependency threads found: YES 00:01:38.114 Library dl found: YES 00:01:38.114 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.114 Run-time dependency json-c found: YES 0.17 00:01:38.114 Run-time dependency cmocka found: YES 1.1.7 00:01:38.114 Program pytest-3 found: NO 00:01:38.114 Program flake8 found: NO 00:01:38.114 Program misspell-fixer found: NO 00:01:38.114 Program restructuredtext-lint found: NO 00:01:38.114 Program valgrind found: YES (/usr/bin/valgrind) 00:01:38.114 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:38.114 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.114 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.114 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:38.114 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:01:38.114 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:01:38.114 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:38.114 Build targets in project: 8 00:01:38.114 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:38.114 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:38.114 00:01:38.114 libvfio-user 0.0.1 00:01:38.114 00:01:38.114 User defined options 00:01:38.114 buildtype : debug 00:01:38.114 default_library: shared 00:01:38.114 libdir : /usr/local/lib 00:01:38.114 00:01:38.114 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.681 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:38.681 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:38.681 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:38.940 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:38.940 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:38.940 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:38.940 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:38.940 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:38.940 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:38.940 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:38.940 [10/37] Compiling C object samples/null.p/null.c.o 00:01:38.940 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:38.940 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:38.940 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:38.940 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:38.940 [15/37] Compiling C object samples/server.p/server.c.o 00:01:38.940 [16/37] Compiling C object samples/client.p/client.c.o 00:01:38.940 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:38.940 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:39.200 [19/37] Linking target samples/client 00:01:39.200 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:39.200 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:39.200 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:39.200 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:39.200 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:39.200 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:39.200 [26/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:39.200 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:39.200 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:39.200 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:39.200 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:39.200 [31/37] Linking target test/unit_tests 00:01:39.459 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:39.459 [33/37] Linking target samples/null 00:01:39.459 [34/37] Linking target samples/server 00:01:39.459 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:39.459 [36/37] Linking target samples/gpio-pci-idio-16 00:01:39.459 [37/37] Linking target samples/lspci 00:01:39.459 INFO: autodetecting backend as ninja 00:01:39.459 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:39.459 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:40.026 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:40.026 ninja: no work to do. 00:01:48.140 The Meson build system 00:01:48.140 Version: 1.5.0 00:01:48.140 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:48.140 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:48.140 Build type: native build 00:01:48.140 Program cat found: YES (/usr/bin/cat) 00:01:48.140 Project name: DPDK 00:01:48.140 Project version: 23.11.0 00:01:48.140 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:48.140 C linker for the host machine: cc ld.bfd 2.40-14 00:01:48.140 Host machine cpu family: x86_64 00:01:48.140 Host machine cpu: x86_64 00:01:48.140 Message: ## Building in Developer Mode ## 00:01:48.140 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.140 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.140 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.140 Program python3 found: YES (/usr/bin/python3) 00:01:48.140 Program cat found: YES (/usr/bin/cat) 00:01:48.140 Compiler for C supports arguments -march=native: YES 00:01:48.140 Checking for size of "void *" : 8 00:01:48.140 Checking for size of "void *" : 8 (cached) 00:01:48.140 Library m found: YES 00:01:48.140 Library numa found: YES 00:01:48.140 Has header "numaif.h" : YES 00:01:48.140 Library fdt found: NO 00:01:48.140 Library execinfo found: NO 00:01:48.140 Has header "execinfo.h" : YES 00:01:48.140 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:48.140 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.140 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.140 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.140 Run-time dependency openssl found: YES 3.1.1 00:01:48.140 Run-time dependency libpcap found: YES 1.10.4 00:01:48.140 Has header "pcap.h" with dependency libpcap: YES 00:01:48.140 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.140 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.140 Compiler for C supports arguments -Wformat: YES 00:01:48.140 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.140 Compiler for C supports arguments -Wformat-security: NO 00:01:48.140 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.140 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.140 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.140 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.140 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.140 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.140 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.140 Compiler for C supports arguments -Wundef: YES 00:01:48.140 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.140 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.140 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.140 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.140 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.140 Program objdump found: YES (/usr/bin/objdump) 00:01:48.140 
Compiler for C supports arguments -mavx512f: YES 00:01:48.140 Checking if "AVX512 checking" compiles: YES 00:01:48.140 Fetching value of define "__SSE4_2__" : 1 00:01:48.140 Fetching value of define "__AES__" : 1 00:01:48.140 Fetching value of define "__AVX__" : 1 00:01:48.140 Fetching value of define "__AVX2__" : 1 00:01:48.140 Fetching value of define "__AVX512BW__" : (undefined) 00:01:48.140 Fetching value of define "__AVX512CD__" : (undefined) 00:01:48.140 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:48.140 Fetching value of define "__AVX512F__" : (undefined) 00:01:48.140 Fetching value of define "__AVX512VL__" : (undefined) 00:01:48.140 Fetching value of define "__PCLMUL__" : 1 00:01:48.140 Fetching value of define "__RDRND__" : 1 00:01:48.140 Fetching value of define "__RDSEED__" : 1 00:01:48.140 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.140 Fetching value of define "__znver1__" : (undefined) 00:01:48.140 Fetching value of define "__znver2__" : (undefined) 00:01:48.140 Fetching value of define "__znver3__" : (undefined) 00:01:48.140 Fetching value of define "__znver4__" : (undefined) 00:01:48.140 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.140 Message: lib/log: Defining dependency "log" 00:01:48.140 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.140 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.140 Checking for function "getentropy" : NO 00:01:48.140 Message: lib/eal: Defining dependency "eal" 00:01:48.140 Message: lib/ring: Defining dependency "ring" 00:01:48.140 Message: lib/rcu: Defining dependency "rcu" 00:01:48.140 Message: lib/mempool: Defining dependency "mempool" 00:01:48.140 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.140 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.140 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.140 Compiler for C supports arguments -mpclmul: YES 00:01:48.140 Compiler for C supports arguments -maes: YES 00:01:48.140 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.140 Compiler for C supports arguments -mavx512bw: YES 00:01:48.140 Compiler for C supports arguments -mavx512dq: YES 00:01:48.140 Compiler for C supports arguments -mavx512vl: YES 00:01:48.140 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.140 Compiler for C supports arguments -mavx2: YES 00:01:48.140 Compiler for C supports arguments -mavx: YES 00:01:48.140 Message: lib/net: Defining dependency "net" 00:01:48.140 Message: lib/meter: Defining dependency "meter" 00:01:48.140 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.140 Message: lib/pci: Defining dependency "pci" 00:01:48.140 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.140 Message: lib/hash: Defining dependency "hash" 00:01:48.140 Message: lib/timer: Defining dependency "timer" 00:01:48.140 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.140 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.140 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.140 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.140 Message: lib/power: Defining dependency "power" 00:01:48.140 Message: lib/reorder: Defining dependency "reorder" 00:01:48.140 Message: lib/security: Defining dependency "security" 00:01:48.140 Has header "linux/userfaultfd.h" : YES 00:01:48.140 Has header "linux/vduse.h" : YES 00:01:48.140 Message: lib/vhost: Defining dependency "vhost" 00:01:48.140 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:48.140 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.140 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.140 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.140 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.140 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.140 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.140 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.140 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.140 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.140 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:48.140 Configuring doxy-api-html.conf using configuration 00:01:48.140 Configuring doxy-api-man.conf using configuration 00:01:48.140 Program mandb found: YES (/usr/bin/mandb) 00:01:48.140 Program sphinx-build found: NO 00:01:48.140 Configuring rte_build_config.h using configuration 00:01:48.140 Message: 00:01:48.140 ================= 00:01:48.140 Applications Enabled 00:01:48.140 ================= 00:01:48.140 00:01:48.140 apps: 00:01:48.140 00:01:48.140 00:01:48.140 Message: 00:01:48.140 ================= 00:01:48.140 Libraries Enabled 00:01:48.140 ================= 00:01:48.140 00:01:48.140 libs: 00:01:48.140 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.140 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.140 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.140 00:01:48.140 Message: 00:01:48.140 =============== 00:01:48.140 Drivers Enabled 00:01:48.140 =============== 00:01:48.140 00:01:48.140 common: 00:01:48.140 00:01:48.140 bus: 00:01:48.140 pci, vdev, 00:01:48.140 mempool: 00:01:48.140 ring, 00:01:48.140 dma: 00:01:48.140 00:01:48.140 net: 00:01:48.140 00:01:48.140 crypto: 00:01:48.140 00:01:48.140 compress: 00:01:48.140 00:01:48.140 vdpa: 00:01:48.140 00:01:48.140 00:01:48.140 Message: 00:01:48.140 ================= 00:01:48.140 Content Skipped 00:01:48.140 ================= 00:01:48.140 00:01:48.140 apps: 00:01:48.140 dumpcap: explicitly disabled via build config 00:01:48.140 graph: explicitly disabled via build config 00:01:48.140 pdump: explicitly disabled via build config 00:01:48.140 proc-info: explicitly disabled via build config 00:01:48.140 test-acl: explicitly disabled via build config 00:01:48.140 test-bbdev: explicitly disabled via build config 00:01:48.140 test-cmdline: explicitly disabled via build config 00:01:48.140 test-compress-perf: explicitly disabled via build config 00:01:48.140 test-crypto-perf: explicitly disabled via build config 00:01:48.140 test-dma-perf: explicitly disabled via build config 00:01:48.140 test-eventdev: explicitly disabled via build config 00:01:48.140 test-fib: explicitly disabled via build config 00:01:48.141 test-flow-perf: explicitly disabled via build config 00:01:48.141 test-gpudev: explicitly disabled via build config 00:01:48.141 test-mldev: explicitly disabled via build config 00:01:48.141 test-pipeline: explicitly disabled via build config 00:01:48.141 test-pmd: explicitly disabled via build config 00:01:48.141 test-regex: explicitly disabled via build config 00:01:48.141 test-sad: explicitly disabled via build config 00:01:48.141 test-security-perf: explicitly disabled via build config 00:01:48.141 00:01:48.141 libs: 00:01:48.141 metrics: explicitly 
disabled via build config 00:01:48.141 acl: explicitly disabled via build config 00:01:48.141 bbdev: explicitly disabled via build config 00:01:48.141 bitratestats: explicitly disabled via build config 00:01:48.141 bpf: explicitly disabled via build config 00:01:48.141 cfgfile: explicitly disabled via build config 00:01:48.141 distributor: explicitly disabled via build config 00:01:48.141 efd: explicitly disabled via build config 00:01:48.141 eventdev: explicitly disabled via build config 00:01:48.141 dispatcher: explicitly disabled via build config 00:01:48.141 gpudev: explicitly disabled via build config 00:01:48.141 gro: explicitly disabled via build config 00:01:48.141 gso: explicitly disabled via build config 00:01:48.141 ip_frag: explicitly disabled via build config 00:01:48.141 jobstats: explicitly disabled via build config 00:01:48.141 latencystats: explicitly disabled via build config 00:01:48.141 lpm: explicitly disabled via build config 00:01:48.141 member: explicitly disabled via build config 00:01:48.141 pcapng: explicitly disabled via build config 00:01:48.141 rawdev: explicitly disabled via build config 00:01:48.141 regexdev: explicitly disabled via build config 00:01:48.141 mldev: explicitly disabled via build config 00:01:48.141 rib: explicitly disabled via build config 00:01:48.141 sched: explicitly disabled via build config 00:01:48.141 stack: explicitly disabled via build config 00:01:48.141 ipsec: explicitly disabled via build config 00:01:48.141 pdcp: explicitly disabled via build config 00:01:48.141 fib: explicitly disabled via build config 00:01:48.141 port: explicitly disabled via build config 00:01:48.141 pdump: explicitly disabled via build config 00:01:48.141 table: explicitly disabled via build config 00:01:48.141 pipeline: explicitly disabled via build config 00:01:48.141 graph: explicitly disabled via build config 00:01:48.141 node: explicitly disabled via build config 00:01:48.141 00:01:48.141 drivers: 00:01:48.141 common/cpt: not in enabled drivers build config 00:01:48.141 common/dpaax: not in enabled drivers build config 00:01:48.141 common/iavf: not in enabled drivers build config 00:01:48.141 common/idpf: not in enabled drivers build config 00:01:48.141 common/mvep: not in enabled drivers build config 00:01:48.141 common/octeontx: not in enabled drivers build config 00:01:48.141 bus/auxiliary: not in enabled drivers build config 00:01:48.141 bus/cdx: not in enabled drivers build config 00:01:48.141 bus/dpaa: not in enabled drivers build config 00:01:48.141 bus/fslmc: not in enabled drivers build config 00:01:48.141 bus/ifpga: not in enabled drivers build config 00:01:48.141 bus/platform: not in enabled drivers build config 00:01:48.141 bus/vmbus: not in enabled drivers build config 00:01:48.141 common/cnxk: not in enabled drivers build config 00:01:48.141 common/mlx5: not in enabled drivers build config 00:01:48.141 common/nfp: not in enabled drivers build config 00:01:48.141 common/qat: not in enabled drivers build config 00:01:48.141 common/sfc_efx: not in enabled drivers build config 00:01:48.141 mempool/bucket: not in enabled drivers build config 00:01:48.141 mempool/cnxk: not in enabled drivers build config 00:01:48.141 mempool/dpaa: not in enabled drivers build config 00:01:48.141 mempool/dpaa2: not in enabled drivers build config 00:01:48.141 mempool/octeontx: not in enabled drivers build config 00:01:48.141 mempool/stack: not in enabled drivers build config 00:01:48.141 dma/cnxk: not in enabled drivers build config 00:01:48.141 dma/dpaa: not in 
enabled drivers build config 00:01:48.141 dma/dpaa2: not in enabled drivers build config 00:01:48.141 dma/hisilicon: not in enabled drivers build config 00:01:48.141 dma/idxd: not in enabled drivers build config 00:01:48.141 dma/ioat: not in enabled drivers build config 00:01:48.141 dma/skeleton: not in enabled drivers build config 00:01:48.141 net/af_packet: not in enabled drivers build config 00:01:48.141 net/af_xdp: not in enabled drivers build config 00:01:48.141 net/ark: not in enabled drivers build config 00:01:48.141 net/atlantic: not in enabled drivers build config 00:01:48.141 net/avp: not in enabled drivers build config 00:01:48.141 net/axgbe: not in enabled drivers build config 00:01:48.141 net/bnx2x: not in enabled drivers build config 00:01:48.141 net/bnxt: not in enabled drivers build config 00:01:48.141 net/bonding: not in enabled drivers build config 00:01:48.141 net/cnxk: not in enabled drivers build config 00:01:48.141 net/cpfl: not in enabled drivers build config 00:01:48.141 net/cxgbe: not in enabled drivers build config 00:01:48.141 net/dpaa: not in enabled drivers build config 00:01:48.141 net/dpaa2: not in enabled drivers build config 00:01:48.141 net/e1000: not in enabled drivers build config 00:01:48.141 net/ena: not in enabled drivers build config 00:01:48.141 net/enetc: not in enabled drivers build config 00:01:48.141 net/enetfec: not in enabled drivers build config 00:01:48.141 net/enic: not in enabled drivers build config 00:01:48.141 net/failsafe: not in enabled drivers build config 00:01:48.141 net/fm10k: not in enabled drivers build config 00:01:48.141 net/gve: not in enabled drivers build config 00:01:48.141 net/hinic: not in enabled drivers build config 00:01:48.141 net/hns3: not in enabled drivers build config 00:01:48.141 net/i40e: not in enabled drivers build config 00:01:48.141 net/iavf: not in enabled drivers build config 00:01:48.141 net/ice: not in enabled drivers build config 00:01:48.141 net/idpf: not in enabled drivers build config 00:01:48.141 net/igc: not in enabled drivers build config 00:01:48.141 net/ionic: not in enabled drivers build config 00:01:48.141 net/ipn3ke: not in enabled drivers build config 00:01:48.141 net/ixgbe: not in enabled drivers build config 00:01:48.141 net/mana: not in enabled drivers build config 00:01:48.141 net/memif: not in enabled drivers build config 00:01:48.141 net/mlx4: not in enabled drivers build config 00:01:48.141 net/mlx5: not in enabled drivers build config 00:01:48.141 net/mvneta: not in enabled drivers build config 00:01:48.141 net/mvpp2: not in enabled drivers build config 00:01:48.141 net/netvsc: not in enabled drivers build config 00:01:48.141 net/nfb: not in enabled drivers build config 00:01:48.141 net/nfp: not in enabled drivers build config 00:01:48.141 net/ngbe: not in enabled drivers build config 00:01:48.141 net/null: not in enabled drivers build config 00:01:48.141 net/octeontx: not in enabled drivers build config 00:01:48.141 net/octeon_ep: not in enabled drivers build config 00:01:48.141 net/pcap: not in enabled drivers build config 00:01:48.141 net/pfe: not in enabled drivers build config 00:01:48.141 net/qede: not in enabled drivers build config 00:01:48.141 net/ring: not in enabled drivers build config 00:01:48.141 net/sfc: not in enabled drivers build config 00:01:48.141 net/softnic: not in enabled drivers build config 00:01:48.141 net/tap: not in enabled drivers build config 00:01:48.141 net/thunderx: not in enabled drivers build config 00:01:48.141 net/txgbe: not in enabled drivers 
build config 00:01:48.141 net/vdev_netvsc: not in enabled drivers build config 00:01:48.141 net/vhost: not in enabled drivers build config 00:01:48.141 net/virtio: not in enabled drivers build config 00:01:48.141 net/vmxnet3: not in enabled drivers build config 00:01:48.141 raw/*: missing internal dependency, "rawdev" 00:01:48.141 crypto/armv8: not in enabled drivers build config 00:01:48.141 crypto/bcmfs: not in enabled drivers build config 00:01:48.141 crypto/caam_jr: not in enabled drivers build config 00:01:48.141 crypto/ccp: not in enabled drivers build config 00:01:48.141 crypto/cnxk: not in enabled drivers build config 00:01:48.141 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.141 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.141 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.141 crypto/mlx5: not in enabled drivers build config 00:01:48.141 crypto/mvsam: not in enabled drivers build config 00:01:48.141 crypto/nitrox: not in enabled drivers build config 00:01:48.141 crypto/null: not in enabled drivers build config 00:01:48.141 crypto/octeontx: not in enabled drivers build config 00:01:48.141 crypto/openssl: not in enabled drivers build config 00:01:48.141 crypto/scheduler: not in enabled drivers build config 00:01:48.141 crypto/uadk: not in enabled drivers build config 00:01:48.141 crypto/virtio: not in enabled drivers build config 00:01:48.141 compress/isal: not in enabled drivers build config 00:01:48.141 compress/mlx5: not in enabled drivers build config 00:01:48.141 compress/octeontx: not in enabled drivers build config 00:01:48.141 compress/zlib: not in enabled drivers build config 00:01:48.141 regex/*: missing internal dependency, "regexdev" 00:01:48.141 ml/*: missing internal dependency, "mldev" 00:01:48.141 vdpa/ifc: not in enabled drivers build config 00:01:48.141 vdpa/mlx5: not in enabled drivers build config 00:01:48.141 vdpa/nfp: not in enabled drivers build config 00:01:48.141 vdpa/sfc: not in enabled drivers build config 00:01:48.141 event/*: missing internal dependency, "eventdev" 00:01:48.141 baseband/*: missing internal dependency, "bbdev" 00:01:48.141 gpu/*: missing internal dependency, "gpudev" 00:01:48.141 00:01:48.141 00:01:48.141 Build targets in project: 85 00:01:48.141 00:01:48.141 DPDK 23.11.0 00:01:48.141 00:01:48.141 User defined options 00:01:48.141 buildtype : debug 00:01:48.141 default_library : shared 00:01:48.141 libdir : lib 00:01:48.141 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:48.141 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:48.141 c_link_args : 00:01:48.141 cpu_instruction_set: native 00:01:48.142 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:48.142 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:48.142 enable_docs : false 00:01:48.142 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:48.142 enable_kmods : false 00:01:48.142 tests : false 00:01:48.142 00:01:48.142 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.710 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:48.710 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.710 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.710 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.710 [4/265] Linking static target lib/librte_kvargs.a 00:01:48.710 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.710 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.710 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.710 [8/265] Linking static target lib/librte_log.a 00:01:48.710 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.710 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.278 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.536 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.536 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.536 [14/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.536 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:49.536 [16/265] Linking target lib/librte_log.so.24.0 00:01:49.536 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:49.792 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:49.793 [19/265] Linking static target lib/librte_telemetry.a 00:01:49.793 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:49.793 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:49.793 [22/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:49.793 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:49.793 [24/265] Linking target lib/librte_kvargs.so.24.0 00:01:50.050 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.050 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.050 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:50.308 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.308 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:50.308 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.582 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.582 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.582 [33/265] Linking target lib/librte_telemetry.so.24.0 00:01:50.582 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.850 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.850 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.850 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:50.850 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.850 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.850 [40/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.108 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.108 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.108 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.108 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.108 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.365 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.623 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.623 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.623 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.881 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.881 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.881 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.881 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.881 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.139 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.139 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.139 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.139 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.397 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.397 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.397 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.397 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.655 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.655 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.655 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.913 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.913 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.913 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.171 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.430 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.430 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.430 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.430 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.430 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.430 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.430 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.430 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.430 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.689 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.689 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.948 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.206 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.207 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.465 [84/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.465 [85/265] Linking static target lib/librte_rcu.a 00:01:54.465 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.465 [87/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.465 [88/265] Linking static target lib/librte_ring.a 00:01:54.465 [89/265] Linking static target lib/librte_eal.a 00:01:54.465 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.723 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.723 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.723 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.723 [94/265] Linking static target lib/librte_mempool.a 00:01:54.982 [95/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.982 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.982 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.982 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.551 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.551 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.551 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.551 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.551 [103/265] Linking static target lib/librte_mbuf.a 00:01:55.810 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.810 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.810 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.068 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:56.068 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.068 [109/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.068 [110/265] Linking static target lib/librte_net.a 00:01:56.327 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:56.327 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.327 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.327 [114/265] Linking static target lib/librte_meter.a 00:01:56.587 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.587 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.587 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.587 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.846 [119/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.104 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:57.104 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 
00:01:57.363 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:57.622 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:57.880 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.880 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.880 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.880 [127/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.880 [128/265] Linking static target lib/librte_pci.a 00:01:57.880 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.880 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.880 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.880 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.880 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.880 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.880 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.880 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.880 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.139 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.139 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.139 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.139 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.139 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.139 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.139 [144/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.139 [145/265] Linking static target lib/librte_ethdev.a 00:01:58.398 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.657 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.657 [148/265] Linking static target lib/librte_cmdline.a 00:01:58.657 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.915 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.916 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.916 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.916 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.916 [154/265] Linking static target lib/librte_timer.a 00:01:58.916 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.174 [156/265] Linking static target lib/librte_hash.a 00:01:59.174 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:59.174 [158/265] Linking static target lib/librte_compressdev.a 00:01:59.434 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:59.434 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:59.693 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.693 [162/265] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:59.693 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.693 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:59.951 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:59.951 [166/265] Linking static target lib/librte_dmadev.a 00:02:00.209 [167/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.209 [168/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.209 [169/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:00.209 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.209 [171/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.209 [172/265] Linking static target lib/librte_cryptodev.a 00:02:00.209 [173/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.209 [174/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.474 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.761 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.761 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.761 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:00.761 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.761 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.031 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.031 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.031 [183/265] Linking static target lib/librte_power.a 00:02:01.290 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.290 [185/265] Linking static target lib/librte_reorder.a 00:02:01.548 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:01.548 [187/265] Linking static target lib/librte_security.a 00:02:01.548 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.548 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.548 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:01.807 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.807 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.807 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.066 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.066 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.325 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.325 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:02.325 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:02.325 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.583 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:02.583 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.583 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:02.841 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:02.841 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.841 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:02.841 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:02.841 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.100 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:03.100 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:03.100 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:03.100 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.100 [212/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:03.100 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.100 [214/265] Linking static target drivers/librte_bus_vdev.a 00:02:03.100 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.100 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.360 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:03.360 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:03.360 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:03.360 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:03.360 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.360 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.360 [223/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.360 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:03.619 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.556 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.556 [227/265] Linking static target lib/librte_vhost.a 00:02:05.124 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.124 [229/265] Linking target lib/librte_eal.so.24.0 00:02:05.124 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:05.124 [231/265] Linking target lib/librte_pci.so.24.0 00:02:05.124 [232/265] Linking target lib/librte_ring.so.24.0 00:02:05.124 [233/265] Linking target lib/librte_timer.so.24.0 00:02:05.124 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:05.124 [235/265] Linking target lib/librte_meter.so.24.0 00:02:05.124 [236/265] Linking target lib/librte_dmadev.so.24.0 00:02:05.382 [237/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.382 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:05.382 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:05.382 [240/265] Generating symbol file 
lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:05.382 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:05.382 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:05.382 [243/265] Linking target lib/librte_rcu.so.24.0 00:02:05.382 [244/265] Linking target lib/librte_mempool.so.24.0 00:02:05.382 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:05.641 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:05.641 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:05.641 [248/265] Linking target lib/librte_mbuf.so.24.0 00:02:05.641 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:05.641 [250/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.641 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:05.900 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:05.900 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:05.900 [254/265] Linking target lib/librte_net.so.24.0 00:02:05.900 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:05.900 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:05.900 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:05.900 [258/265] Linking target lib/librte_security.so.24.0 00:02:06.159 [259/265] Linking target lib/librte_hash.so.24.0 00:02:06.159 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:06.159 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:06.159 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:06.159 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:06.159 [264/265] Linking target lib/librte_power.so.24.0 00:02:06.417 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:06.418 INFO: autodetecting backend as ninja 00:02:06.418 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:07.355 CC lib/ut_mock/mock.o 00:02:07.355 CC lib/ut/ut.o 00:02:07.355 CC lib/log/log.o 00:02:07.355 CC lib/log/log_deprecated.o 00:02:07.355 CC lib/log/log_flags.o 00:02:07.614 LIB libspdk_ut_mock.a 00:02:07.614 LIB libspdk_ut.a 00:02:07.615 SO libspdk_ut_mock.so.5.0 00:02:07.615 LIB libspdk_log.a 00:02:07.615 SO libspdk_ut.so.1.0 00:02:07.615 SO libspdk_log.so.6.1 00:02:07.615 SYMLINK libspdk_ut_mock.so 00:02:07.615 SYMLINK libspdk_ut.so 00:02:07.615 SYMLINK libspdk_log.so 00:02:07.874 CC lib/dma/dma.o 00:02:07.874 CC lib/util/base64.o 00:02:07.874 CC lib/util/bit_array.o 00:02:07.874 CC lib/util/cpuset.o 00:02:07.874 CC lib/util/crc16.o 00:02:07.874 CC lib/util/crc32.o 00:02:07.874 CC lib/util/crc32c.o 00:02:07.874 CC lib/ioat/ioat.o 00:02:07.874 CXX lib/trace_parser/trace.o 00:02:07.874 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.874 CC lib/util/crc32_ieee.o 00:02:07.874 CC lib/util/crc64.o 00:02:07.874 CC lib/util/dif.o 00:02:08.133 CC lib/util/fd.o 00:02:08.133 LIB libspdk_dma.a 00:02:08.133 CC lib/util/file.o 00:02:08.133 SO libspdk_dma.so.3.0 00:02:08.133 CC lib/util/hexlify.o 00:02:08.133 CC lib/util/iov.o 00:02:08.133 SYMLINK libspdk_dma.so 00:02:08.133 CC lib/util/math.o 00:02:08.133 CC lib/util/pipe.o 00:02:08.133 LIB libspdk_ioat.a 00:02:08.133 CC 
lib/vfio_user/host/vfio_user.o 00:02:08.133 SO libspdk_ioat.so.6.0 00:02:08.133 CC lib/util/strerror_tls.o 00:02:08.133 CC lib/util/string.o 00:02:08.133 SYMLINK libspdk_ioat.so 00:02:08.133 CC lib/util/uuid.o 00:02:08.133 CC lib/util/fd_group.o 00:02:08.392 CC lib/util/xor.o 00:02:08.392 CC lib/util/zipf.o 00:02:08.392 LIB libspdk_vfio_user.a 00:02:08.392 SO libspdk_vfio_user.so.4.0 00:02:08.392 SYMLINK libspdk_vfio_user.so 00:02:08.651 LIB libspdk_util.a 00:02:08.651 SO libspdk_util.so.8.0 00:02:08.651 SYMLINK libspdk_util.so 00:02:08.910 LIB libspdk_trace_parser.a 00:02:08.910 CC lib/vmd/vmd.o 00:02:08.910 CC lib/rdma/common.o 00:02:08.910 CC lib/idxd/idxd.o 00:02:08.910 CC lib/vmd/led.o 00:02:08.910 CC lib/idxd/idxd_user.o 00:02:08.910 CC lib/rdma/rdma_verbs.o 00:02:08.910 CC lib/env_dpdk/env.o 00:02:08.910 CC lib/json/json_parse.o 00:02:08.910 CC lib/conf/conf.o 00:02:08.910 SO libspdk_trace_parser.so.4.0 00:02:08.910 SYMLINK libspdk_trace_parser.so 00:02:08.910 CC lib/env_dpdk/memory.o 00:02:08.910 CC lib/env_dpdk/pci.o 00:02:09.169 CC lib/json/json_util.o 00:02:09.169 CC lib/json/json_write.o 00:02:09.169 CC lib/env_dpdk/init.o 00:02:09.169 LIB libspdk_conf.a 00:02:09.169 SO libspdk_conf.so.5.0 00:02:09.169 LIB libspdk_rdma.a 00:02:09.169 SO libspdk_rdma.so.5.0 00:02:09.169 SYMLINK libspdk_conf.so 00:02:09.169 CC lib/env_dpdk/threads.o 00:02:09.169 SYMLINK libspdk_rdma.so 00:02:09.169 CC lib/idxd/idxd_kernel.o 00:02:09.428 CC lib/env_dpdk/pci_ioat.o 00:02:09.428 CC lib/env_dpdk/pci_virtio.o 00:02:09.428 CC lib/env_dpdk/pci_vmd.o 00:02:09.428 CC lib/env_dpdk/pci_idxd.o 00:02:09.428 LIB libspdk_json.a 00:02:09.428 LIB libspdk_idxd.a 00:02:09.428 CC lib/env_dpdk/pci_event.o 00:02:09.428 SO libspdk_json.so.5.1 00:02:09.428 SO libspdk_idxd.so.11.0 00:02:09.428 CC lib/env_dpdk/sigbus_handler.o 00:02:09.428 CC lib/env_dpdk/pci_dpdk.o 00:02:09.428 LIB libspdk_vmd.a 00:02:09.428 SYMLINK libspdk_json.so 00:02:09.428 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.428 SYMLINK libspdk_idxd.so 00:02:09.428 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.686 SO libspdk_vmd.so.5.0 00:02:09.686 SYMLINK libspdk_vmd.so 00:02:09.686 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.686 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.686 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.686 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.944 LIB libspdk_jsonrpc.a 00:02:09.944 SO libspdk_jsonrpc.so.5.1 00:02:09.944 SYMLINK libspdk_jsonrpc.so 00:02:10.201 CC lib/rpc/rpc.o 00:02:10.201 LIB libspdk_env_dpdk.a 00:02:10.459 SO libspdk_env_dpdk.so.13.0 00:02:10.459 LIB libspdk_rpc.a 00:02:10.459 SO libspdk_rpc.so.5.0 00:02:10.459 SYMLINK libspdk_rpc.so 00:02:10.459 SYMLINK libspdk_env_dpdk.so 00:02:10.459 CC lib/sock/sock.o 00:02:10.459 CC lib/sock/sock_rpc.o 00:02:10.459 CC lib/notify/notify.o 00:02:10.459 CC lib/notify/notify_rpc.o 00:02:10.459 CC lib/trace/trace.o 00:02:10.459 CC lib/trace/trace_flags.o 00:02:10.459 CC lib/trace/trace_rpc.o 00:02:10.718 LIB libspdk_notify.a 00:02:10.718 SO libspdk_notify.so.5.0 00:02:10.718 LIB libspdk_trace.a 00:02:10.718 SYMLINK libspdk_notify.so 00:02:10.718 SO libspdk_trace.so.9.0 00:02:10.977 SYMLINK libspdk_trace.so 00:02:10.977 LIB libspdk_sock.a 00:02:10.977 SO libspdk_sock.so.8.0 00:02:10.977 SYMLINK libspdk_sock.so 00:02:10.977 CC lib/thread/thread.o 00:02:10.977 CC lib/thread/iobuf.o 00:02:11.236 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.236 CC lib/nvme/nvme_ctrlr.o 00:02:11.236 CC lib/nvme/nvme_ns.o 00:02:11.236 CC lib/nvme/nvme_fabric.o 00:02:11.236 CC lib/nvme/nvme_ns_cmd.o 00:02:11.236 CC 
lib/nvme/nvme_qpair.o 00:02:11.236 CC lib/nvme/nvme_pcie.o 00:02:11.236 CC lib/nvme/nvme_pcie_common.o 00:02:11.494 CC lib/nvme/nvme.o 00:02:12.061 CC lib/nvme/nvme_quirks.o 00:02:12.061 CC lib/nvme/nvme_transport.o 00:02:12.061 CC lib/nvme/nvme_discovery.o 00:02:12.061 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.061 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.061 CC lib/nvme/nvme_tcp.o 00:02:12.319 CC lib/nvme/nvme_opal.o 00:02:12.319 CC lib/nvme/nvme_io_msg.o 00:02:12.578 CC lib/nvme/nvme_poll_group.o 00:02:12.578 CC lib/nvme/nvme_zns.o 00:02:12.578 LIB libspdk_thread.a 00:02:12.578 CC lib/nvme/nvme_cuse.o 00:02:12.578 SO libspdk_thread.so.9.0 00:02:12.578 CC lib/nvme/nvme_vfio_user.o 00:02:12.578 CC lib/nvme/nvme_rdma.o 00:02:12.837 SYMLINK libspdk_thread.so 00:02:12.837 CC lib/accel/accel.o 00:02:12.837 CC lib/blob/blobstore.o 00:02:13.096 CC lib/init/json_config.o 00:02:13.096 CC lib/accel/accel_rpc.o 00:02:13.355 CC lib/init/subsystem.o 00:02:13.355 CC lib/blob/request.o 00:02:13.355 CC lib/blob/zeroes.o 00:02:13.355 CC lib/accel/accel_sw.o 00:02:13.355 CC lib/virtio/virtio.o 00:02:13.355 CC lib/init/subsystem_rpc.o 00:02:13.614 CC lib/blob/blob_bs_dev.o 00:02:13.614 CC lib/init/rpc.o 00:02:13.614 CC lib/vfu_tgt/tgt_endpoint.o 00:02:13.614 CC lib/virtio/virtio_vhost_user.o 00:02:13.614 CC lib/virtio/virtio_vfio_user.o 00:02:13.614 CC lib/virtio/virtio_pci.o 00:02:13.614 CC lib/vfu_tgt/tgt_rpc.o 00:02:13.614 LIB libspdk_init.a 00:02:13.873 SO libspdk_init.so.4.0 00:02:13.873 LIB libspdk_accel.a 00:02:13.873 SYMLINK libspdk_init.so 00:02:13.873 SO libspdk_accel.so.14.0 00:02:13.873 LIB libspdk_vfu_tgt.a 00:02:13.873 SYMLINK libspdk_accel.so 00:02:13.873 LIB libspdk_virtio.a 00:02:13.873 CC lib/event/app.o 00:02:13.873 CC lib/event/reactor.o 00:02:13.873 SO libspdk_vfu_tgt.so.2.0 00:02:13.873 CC lib/event/log_rpc.o 00:02:13.873 CC lib/event/app_rpc.o 00:02:13.873 CC lib/event/scheduler_static.o 00:02:13.873 SO libspdk_virtio.so.6.0 00:02:14.132 SYMLINK libspdk_vfu_tgt.so 00:02:14.132 LIB libspdk_nvme.a 00:02:14.132 SYMLINK libspdk_virtio.so 00:02:14.132 CC lib/bdev/bdev_rpc.o 00:02:14.132 CC lib/bdev/bdev.o 00:02:14.132 CC lib/bdev/bdev_zone.o 00:02:14.132 CC lib/bdev/part.o 00:02:14.132 CC lib/bdev/scsi_nvme.o 00:02:14.132 SO libspdk_nvme.so.12.0 00:02:14.391 LIB libspdk_event.a 00:02:14.391 SO libspdk_event.so.12.0 00:02:14.391 SYMLINK libspdk_nvme.so 00:02:14.391 SYMLINK libspdk_event.so 00:02:15.328 LIB libspdk_blob.a 00:02:15.328 SO libspdk_blob.so.10.1 00:02:15.587 SYMLINK libspdk_blob.so 00:02:15.587 CC lib/lvol/lvol.o 00:02:15.587 CC lib/blobfs/tree.o 00:02:15.587 CC lib/blobfs/blobfs.o 00:02:16.524 LIB libspdk_bdev.a 00:02:16.524 LIB libspdk_blobfs.a 00:02:16.524 SO libspdk_blobfs.so.9.0 00:02:16.524 SO libspdk_bdev.so.14.0 00:02:16.524 SYMLINK libspdk_blobfs.so 00:02:16.524 LIB libspdk_lvol.a 00:02:16.524 SO libspdk_lvol.so.9.1 00:02:16.524 SYMLINK libspdk_bdev.so 00:02:16.524 SYMLINK libspdk_lvol.so 00:02:16.782 CC lib/scsi/dev.o 00:02:16.782 CC lib/scsi/port.o 00:02:16.782 CC lib/scsi/scsi.o 00:02:16.782 CC lib/scsi/lun.o 00:02:16.782 CC lib/scsi/scsi_bdev.o 00:02:16.782 CC lib/scsi/scsi_pr.o 00:02:16.782 CC lib/nbd/nbd.o 00:02:16.782 CC lib/ublk/ublk.o 00:02:16.782 CC lib/nvmf/ctrlr.o 00:02:16.782 CC lib/ftl/ftl_core.o 00:02:16.782 CC lib/ftl/ftl_init.o 00:02:17.041 CC lib/ftl/ftl_layout.o 00:02:17.041 CC lib/ftl/ftl_debug.o 00:02:17.041 CC lib/nvmf/ctrlr_discovery.o 00:02:17.041 CC lib/nvmf/ctrlr_bdev.o 00:02:17.041 CC lib/nbd/nbd_rpc.o 00:02:17.041 CC lib/nvmf/subsystem.o 
00:02:17.041 CC lib/ftl/ftl_io.o 00:02:17.299 CC lib/scsi/scsi_rpc.o 00:02:17.299 CC lib/scsi/task.o 00:02:17.299 CC lib/ftl/ftl_sb.o 00:02:17.299 LIB libspdk_nbd.a 00:02:17.299 CC lib/ublk/ublk_rpc.o 00:02:17.299 SO libspdk_nbd.so.6.0 00:02:17.299 CC lib/ftl/ftl_l2p.o 00:02:17.299 SYMLINK libspdk_nbd.so 00:02:17.299 CC lib/nvmf/nvmf.o 00:02:17.299 CC lib/nvmf/nvmf_rpc.o 00:02:17.557 LIB libspdk_scsi.a 00:02:17.557 LIB libspdk_ublk.a 00:02:17.557 CC lib/ftl/ftl_l2p_flat.o 00:02:17.557 SO libspdk_scsi.so.8.0 00:02:17.557 SO libspdk_ublk.so.2.0 00:02:17.557 CC lib/nvmf/transport.o 00:02:17.557 CC lib/nvmf/tcp.o 00:02:17.557 SYMLINK libspdk_scsi.so 00:02:17.557 CC lib/nvmf/vfio_user.o 00:02:17.557 SYMLINK libspdk_ublk.so 00:02:17.557 CC lib/nvmf/rdma.o 00:02:17.816 CC lib/ftl/ftl_nv_cache.o 00:02:17.816 CC lib/ftl/ftl_band.o 00:02:18.075 CC lib/ftl/ftl_band_ops.o 00:02:18.334 CC lib/ftl/ftl_writer.o 00:02:18.334 CC lib/vhost/vhost.o 00:02:18.334 CC lib/ftl/ftl_rq.o 00:02:18.334 CC lib/iscsi/conn.o 00:02:18.334 CC lib/iscsi/init_grp.o 00:02:18.334 CC lib/iscsi/iscsi.o 00:02:18.593 CC lib/ftl/ftl_reloc.o 00:02:18.593 CC lib/ftl/ftl_l2p_cache.o 00:02:18.593 CC lib/ftl/ftl_p2l.o 00:02:18.593 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.853 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.853 CC lib/vhost/vhost_rpc.o 00:02:19.112 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.112 CC lib/vhost/vhost_scsi.o 00:02:19.112 CC lib/vhost/vhost_blk.o 00:02:19.112 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.112 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.112 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.112 CC lib/iscsi/md5.o 00:02:19.112 CC lib/iscsi/param.o 00:02:19.372 CC lib/iscsi/portal_grp.o 00:02:19.372 CC lib/iscsi/tgt_node.o 00:02:19.372 CC lib/iscsi/iscsi_subsystem.o 00:02:19.372 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.630 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.630 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.630 LIB libspdk_nvmf.a 00:02:19.630 CC lib/vhost/rte_vhost_user.o 00:02:19.630 CC lib/iscsi/iscsi_rpc.o 00:02:19.630 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.630 SO libspdk_nvmf.so.17.0 00:02:19.888 CC lib/iscsi/task.o 00:02:19.888 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.888 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.888 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.888 SYMLINK libspdk_nvmf.so 00:02:19.888 CC lib/ftl/utils/ftl_conf.o 00:02:19.888 CC lib/ftl/utils/ftl_md.o 00:02:19.888 CC lib/ftl/utils/ftl_mempool.o 00:02:19.888 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.888 LIB libspdk_iscsi.a 00:02:19.888 CC lib/ftl/utils/ftl_property.o 00:02:20.146 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:20.146 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:20.146 SO libspdk_iscsi.so.7.0 00:02:20.146 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:20.146 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:20.146 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:20.146 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:20.146 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:20.146 SYMLINK libspdk_iscsi.so 00:02:20.146 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:20.146 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:20.405 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:20.405 CC lib/ftl/base/ftl_base_dev.o 00:02:20.405 CC lib/ftl/base/ftl_base_bdev.o 00:02:20.405 CC lib/ftl/ftl_trace.o 00:02:20.663 LIB libspdk_ftl.a 00:02:20.663 LIB libspdk_vhost.a 00:02:20.663 SO libspdk_vhost.so.7.1 00:02:20.922 SO libspdk_ftl.so.8.0 00:02:20.922 SYMLINK libspdk_vhost.so 00:02:20.922 SYMLINK libspdk_ftl.so 00:02:21.180 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.180 CC module/vfu_device/vfu_virtio.o 
00:02:21.180 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.180 CC module/sock/posix/posix.o 00:02:21.181 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.181 CC module/sock/uring/uring.o 00:02:21.181 CC module/accel/error/accel_error.o 00:02:21.181 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.181 CC module/blob/bdev/blob_bdev.o 00:02:21.181 CC module/accel/ioat/accel_ioat.o 00:02:21.440 LIB libspdk_env_dpdk_rpc.a 00:02:21.440 SO libspdk_env_dpdk_rpc.so.5.0 00:02:21.440 LIB libspdk_scheduler_gscheduler.a 00:02:21.440 LIB libspdk_scheduler_dpdk_governor.a 00:02:21.440 SYMLINK libspdk_env_dpdk_rpc.so 00:02:21.440 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.440 SO libspdk_scheduler_gscheduler.so.3.0 00:02:21.440 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:21.440 CC module/accel/error/accel_error_rpc.o 00:02:21.440 LIB libspdk_scheduler_dynamic.a 00:02:21.440 SO libspdk_scheduler_dynamic.so.3.0 00:02:21.440 SYMLINK libspdk_scheduler_gscheduler.so 00:02:21.440 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:21.440 CC module/vfu_device/vfu_virtio_blk.o 00:02:21.440 LIB libspdk_blob_bdev.a 00:02:21.440 SYMLINK libspdk_scheduler_dynamic.so 00:02:21.440 CC module/vfu_device/vfu_virtio_scsi.o 00:02:21.440 SO libspdk_blob_bdev.so.10.1 00:02:21.440 LIB libspdk_accel_ioat.a 00:02:21.699 LIB libspdk_accel_error.a 00:02:21.699 CC module/accel/dsa/accel_dsa.o 00:02:21.699 SO libspdk_accel_ioat.so.5.0 00:02:21.699 CC module/accel/iaa/accel_iaa.o 00:02:21.699 SO libspdk_accel_error.so.1.0 00:02:21.699 SYMLINK libspdk_blob_bdev.so 00:02:21.699 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.699 SYMLINK libspdk_accel_ioat.so 00:02:21.699 CC module/vfu_device/vfu_virtio_rpc.o 00:02:21.699 SYMLINK libspdk_accel_error.so 00:02:21.699 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.699 LIB libspdk_accel_iaa.a 00:02:21.958 SO libspdk_accel_iaa.so.2.0 00:02:21.958 LIB libspdk_accel_dsa.a 00:02:21.958 SO libspdk_accel_dsa.so.4.0 00:02:21.958 SYMLINK libspdk_accel_iaa.so 00:02:21.958 LIB libspdk_vfu_device.a 00:02:21.958 LIB libspdk_sock_uring.a 00:02:21.958 CC module/bdev/delay/vbdev_delay.o 00:02:21.958 CC module/bdev/error/vbdev_error.o 00:02:21.958 CC module/bdev/gpt/gpt.o 00:02:21.958 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.958 SYMLINK libspdk_accel_dsa.so 00:02:21.958 SO libspdk_vfu_device.so.2.0 00:02:21.958 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.958 SO libspdk_sock_uring.so.4.0 00:02:21.958 LIB libspdk_sock_posix.a 00:02:21.958 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.958 CC module/bdev/malloc/bdev_malloc.o 00:02:21.958 SO libspdk_sock_posix.so.5.0 00:02:21.958 SYMLINK libspdk_sock_uring.so 00:02:21.958 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.958 SYMLINK libspdk_vfu_device.so 00:02:21.958 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.217 SYMLINK libspdk_sock_posix.so 00:02:22.217 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.217 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.217 LIB libspdk_bdev_error.a 00:02:22.217 LIB libspdk_bdev_gpt.a 00:02:22.217 CC module/bdev/null/bdev_null.o 00:02:22.217 SO libspdk_bdev_error.so.5.0 00:02:22.217 SO libspdk_bdev_gpt.so.5.0 00:02:22.217 LIB libspdk_blobfs_bdev.a 00:02:22.217 CC module/bdev/nvme/bdev_nvme.o 00:02:22.217 SO libspdk_blobfs_bdev.so.5.0 00:02:22.217 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.217 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.217 SYMLINK libspdk_bdev_gpt.so 00:02:22.217 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.217 SYMLINK libspdk_bdev_error.so 00:02:22.475 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.475 LIB libspdk_bdev_malloc.a 00:02:22.475 SYMLINK libspdk_blobfs_bdev.so 00:02:22.475 SO libspdk_bdev_malloc.so.5.0 00:02:22.476 SYMLINK libspdk_bdev_malloc.so 00:02:22.476 CC module/bdev/null/bdev_null_rpc.o 00:02:22.476 LIB libspdk_bdev_lvol.a 00:02:22.476 CC module/bdev/raid/bdev_raid.o 00:02:22.476 LIB libspdk_bdev_delay.a 00:02:22.476 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.476 SO libspdk_bdev_lvol.so.5.0 00:02:22.476 SO libspdk_bdev_delay.so.5.0 00:02:22.476 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.476 CC module/bdev/split/vbdev_split.o 00:02:22.476 SYMLINK libspdk_bdev_delay.so 00:02:22.476 SYMLINK libspdk_bdev_lvol.so 00:02:22.733 LIB libspdk_bdev_null.a 00:02:22.733 LIB libspdk_bdev_passthru.a 00:02:22.733 SO libspdk_bdev_null.so.5.0 00:02:22.733 SO libspdk_bdev_passthru.so.5.0 00:02:22.733 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.733 CC module/bdev/uring/bdev_uring.o 00:02:22.733 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.733 SYMLINK libspdk_bdev_passthru.so 00:02:22.733 SYMLINK libspdk_bdev_null.so 00:02:22.733 CC module/bdev/nvme/nvme_rpc.o 00:02:22.733 CC module/bdev/uring/bdev_uring_rpc.o 00:02:22.733 CC module/bdev/raid/raid0.o 00:02:22.733 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.991 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.991 CC module/bdev/nvme/vbdev_opal.o 00:02:22.991 LIB libspdk_bdev_split.a 00:02:22.991 LIB libspdk_bdev_zone_block.a 00:02:22.991 SO libspdk_bdev_split.so.5.0 00:02:22.991 CC module/bdev/aio/bdev_aio.o 00:02:22.991 SO libspdk_bdev_zone_block.so.5.0 00:02:22.991 LIB libspdk_bdev_uring.a 00:02:22.991 CC module/bdev/raid/raid1.o 00:02:23.248 CC module/bdev/ftl/bdev_ftl.o 00:02:23.248 SO libspdk_bdev_uring.so.5.0 00:02:23.248 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.248 SYMLINK libspdk_bdev_zone_block.so 00:02:23.248 SYMLINK libspdk_bdev_split.so 00:02:23.248 CC module/bdev/raid/concat.o 00:02:23.248 SYMLINK libspdk_bdev_uring.so 00:02:23.248 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.248 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.248 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.248 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.248 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.506 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.506 LIB libspdk_bdev_raid.a 00:02:23.506 LIB libspdk_bdev_aio.a 00:02:23.506 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.506 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.506 SO libspdk_bdev_raid.so.5.0 00:02:23.506 SO libspdk_bdev_aio.so.5.0 00:02:23.506 SYMLINK libspdk_bdev_aio.so 00:02:23.506 LIB libspdk_bdev_iscsi.a 00:02:23.506 SYMLINK libspdk_bdev_raid.so 00:02:23.506 SO libspdk_bdev_iscsi.so.5.0 00:02:23.506 SYMLINK libspdk_bdev_iscsi.so 00:02:23.767 LIB libspdk_bdev_ftl.a 00:02:23.767 SO libspdk_bdev_ftl.so.5.0 00:02:23.767 SYMLINK libspdk_bdev_ftl.so 00:02:23.767 LIB libspdk_bdev_virtio.a 00:02:23.767 SO libspdk_bdev_virtio.so.5.0 00:02:23.767 SYMLINK libspdk_bdev_virtio.so 00:02:24.377 LIB libspdk_bdev_nvme.a 00:02:24.377 SO libspdk_bdev_nvme.so.6.0 00:02:24.635 SYMLINK libspdk_bdev_nvme.so 00:02:24.894 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.894 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.894 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.894 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.894 CC module/event/subsystems/vmd/vmd.o 00:02:24.894 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.894 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:24.894 CC 
module/event/subsystems/sock/sock.o 00:02:24.894 LIB libspdk_event_scheduler.a 00:02:24.894 LIB libspdk_event_sock.a 00:02:24.894 LIB libspdk_event_vhost_blk.a 00:02:24.894 SO libspdk_event_scheduler.so.3.0 00:02:24.894 SO libspdk_event_sock.so.4.0 00:02:25.153 LIB libspdk_event_vfu_tgt.a 00:02:25.153 LIB libspdk_event_vmd.a 00:02:25.153 SO libspdk_event_vhost_blk.so.2.0 00:02:25.153 LIB libspdk_event_iobuf.a 00:02:25.153 SO libspdk_event_vfu_tgt.so.2.0 00:02:25.153 SO libspdk_event_vmd.so.5.0 00:02:25.153 SO libspdk_event_iobuf.so.2.0 00:02:25.153 SYMLINK libspdk_event_sock.so 00:02:25.153 SYMLINK libspdk_event_scheduler.so 00:02:25.153 SYMLINK libspdk_event_vhost_blk.so 00:02:25.153 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.153 SYMLINK libspdk_event_vmd.so 00:02:25.153 SYMLINK libspdk_event_iobuf.so 00:02:25.411 CC module/event/subsystems/accel/accel.o 00:02:25.412 LIB libspdk_event_accel.a 00:02:25.412 SO libspdk_event_accel.so.5.0 00:02:25.412 SYMLINK libspdk_event_accel.so 00:02:25.669 CC module/event/subsystems/bdev/bdev.o 00:02:25.927 LIB libspdk_event_bdev.a 00:02:25.927 SO libspdk_event_bdev.so.5.0 00:02:25.927 SYMLINK libspdk_event_bdev.so 00:02:26.185 CC module/event/subsystems/ublk/ublk.o 00:02:26.185 CC module/event/subsystems/nbd/nbd.o 00:02:26.185 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.185 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.185 CC module/event/subsystems/scsi/scsi.o 00:02:26.185 LIB libspdk_event_nbd.a 00:02:26.185 LIB libspdk_event_ublk.a 00:02:26.185 LIB libspdk_event_scsi.a 00:02:26.185 SO libspdk_event_nbd.so.5.0 00:02:26.443 SO libspdk_event_ublk.so.2.0 00:02:26.443 SO libspdk_event_scsi.so.5.0 00:02:26.443 SYMLINK libspdk_event_nbd.so 00:02:26.443 SYMLINK libspdk_event_ublk.so 00:02:26.443 SYMLINK libspdk_event_scsi.so 00:02:26.443 LIB libspdk_event_nvmf.a 00:02:26.443 SO libspdk_event_nvmf.so.5.0 00:02:26.443 SYMLINK libspdk_event_nvmf.so 00:02:26.443 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.443 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.700 LIB libspdk_event_vhost_scsi.a 00:02:26.700 LIB libspdk_event_iscsi.a 00:02:26.700 SO libspdk_event_vhost_scsi.so.2.0 00:02:26.700 SO libspdk_event_iscsi.so.5.0 00:02:26.700 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.700 SYMLINK libspdk_event_iscsi.so 00:02:26.958 SO libspdk.so.5.0 00:02:26.958 SYMLINK libspdk.so 00:02:27.216 CXX app/trace/trace.o 00:02:27.216 CC examples/sock/hello_world/hello_sock.o 00:02:27.216 CC examples/ioat/perf/perf.o 00:02:27.216 CC examples/vmd/lsvmd/lsvmd.o 00:02:27.216 CC examples/nvme/hello_world/hello_world.o 00:02:27.216 CC examples/accel/perf/accel_perf.o 00:02:27.216 CC examples/nvmf/nvmf/nvmf.o 00:02:27.216 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.216 CC examples/blob/hello_world/hello_blob.o 00:02:27.216 CC test/accel/dif/dif.o 00:02:27.216 LINK lsvmd 00:02:27.475 LINK ioat_perf 00:02:27.475 LINK hello_sock 00:02:27.475 LINK hello_world 00:02:27.475 LINK hello_blob 00:02:27.475 LINK hello_bdev 00:02:27.475 LINK nvmf 00:02:27.475 CC examples/vmd/led/led.o 00:02:27.475 LINK spdk_trace 00:02:27.733 CC examples/ioat/verify/verify.o 00:02:27.733 LINK accel_perf 00:02:27.733 LINK dif 00:02:27.733 CC examples/nvme/reconnect/reconnect.o 00:02:27.733 LINK led 00:02:27.733 CC examples/util/zipf/zipf.o 00:02:27.733 CC examples/blob/cli/blobcli.o 00:02:27.733 CC app/trace_record/trace_record.o 00:02:27.992 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.992 LINK verify 00:02:27.992 LINK zipf 00:02:27.992 CC examples/thread/thread/thread_ex.o 
00:02:27.992 CC app/nvmf_tgt/nvmf_main.o 00:02:27.992 CC test/app/bdev_svc/bdev_svc.o 00:02:27.992 LINK reconnect 00:02:27.992 CC test/bdev/bdevio/bdevio.o 00:02:27.992 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.992 LINK spdk_trace_record 00:02:28.249 CC examples/idxd/perf/perf.o 00:02:28.249 LINK nvmf_tgt 00:02:28.249 LINK bdev_svc 00:02:28.249 LINK thread 00:02:28.249 CC examples/nvme/arbitration/arbitration.o 00:02:28.249 LINK blobcli 00:02:28.249 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.506 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.506 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.506 LINK bdevio 00:02:28.506 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.507 LINK idxd_perf 00:02:28.507 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.507 LINK nvme_manage 00:02:28.507 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.507 LINK bdevperf 00:02:28.507 LINK arbitration 00:02:28.765 LINK iscsi_tgt 00:02:28.765 CC test/app/histogram_perf/histogram_perf.o 00:02:28.765 LINK nvme_fuzz 00:02:28.765 LINK interrupt_tgt 00:02:28.765 CC examples/nvme/hotplug/hotplug.o 00:02:28.765 CC test/blobfs/mkfs/mkfs.o 00:02:28.765 CC test/app/jsoncat/jsoncat.o 00:02:28.765 CC test/app/stub/stub.o 00:02:28.765 LINK histogram_perf 00:02:29.024 TEST_HEADER include/spdk/accel.h 00:02:29.024 TEST_HEADER include/spdk/accel_module.h 00:02:29.024 TEST_HEADER include/spdk/assert.h 00:02:29.024 TEST_HEADER include/spdk/barrier.h 00:02:29.024 TEST_HEADER include/spdk/base64.h 00:02:29.024 TEST_HEADER include/spdk/bdev.h 00:02:29.024 TEST_HEADER include/spdk/bdev_module.h 00:02:29.024 TEST_HEADER include/spdk/bdev_zone.h 00:02:29.024 TEST_HEADER include/spdk/bit_array.h 00:02:29.024 TEST_HEADER include/spdk/bit_pool.h 00:02:29.024 TEST_HEADER include/spdk/blob_bdev.h 00:02:29.024 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:29.024 TEST_HEADER include/spdk/blobfs.h 00:02:29.024 TEST_HEADER include/spdk/blob.h 00:02:29.024 TEST_HEADER include/spdk/conf.h 00:02:29.024 LINK jsoncat 00:02:29.024 TEST_HEADER include/spdk/config.h 00:02:29.024 TEST_HEADER include/spdk/cpuset.h 00:02:29.024 TEST_HEADER include/spdk/crc16.h 00:02:29.024 CC app/spdk_lspci/spdk_lspci.o 00:02:29.024 TEST_HEADER include/spdk/crc32.h 00:02:29.024 TEST_HEADER include/spdk/crc64.h 00:02:29.024 TEST_HEADER include/spdk/dif.h 00:02:29.024 LINK mkfs 00:02:29.024 TEST_HEADER include/spdk/dma.h 00:02:29.024 TEST_HEADER include/spdk/endian.h 00:02:29.024 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.024 LINK vhost_fuzz 00:02:29.024 TEST_HEADER include/spdk/env.h 00:02:29.024 TEST_HEADER include/spdk/event.h 00:02:29.024 TEST_HEADER include/spdk/fd_group.h 00:02:29.024 TEST_HEADER include/spdk/fd.h 00:02:29.024 TEST_HEADER include/spdk/file.h 00:02:29.024 TEST_HEADER include/spdk/ftl.h 00:02:29.024 TEST_HEADER include/spdk/gpt_spec.h 00:02:29.024 TEST_HEADER include/spdk/hexlify.h 00:02:29.024 TEST_HEADER include/spdk/histogram_data.h 00:02:29.024 TEST_HEADER include/spdk/idxd.h 00:02:29.024 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.024 TEST_HEADER include/spdk/init.h 00:02:29.024 TEST_HEADER include/spdk/ioat.h 00:02:29.024 TEST_HEADER include/spdk/ioat_spec.h 00:02:29.024 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.024 CC app/spdk_tgt/spdk_tgt.o 00:02:29.024 TEST_HEADER include/spdk/json.h 00:02:29.024 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.025 TEST_HEADER include/spdk/likely.h 00:02:29.025 TEST_HEADER include/spdk/log.h 00:02:29.025 LINK hotplug 00:02:29.025 TEST_HEADER include/spdk/lvol.h 00:02:29.025 TEST_HEADER 
include/spdk/memory.h 00:02:29.025 TEST_HEADER include/spdk/mmio.h 00:02:29.025 TEST_HEADER include/spdk/nbd.h 00:02:29.025 TEST_HEADER include/spdk/notify.h 00:02:29.025 TEST_HEADER include/spdk/nvme.h 00:02:29.025 LINK stub 00:02:29.025 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.025 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.025 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.025 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.025 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.025 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.025 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.025 TEST_HEADER include/spdk/nvmf.h 00:02:29.025 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.025 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.025 TEST_HEADER include/spdk/opal.h 00:02:29.025 TEST_HEADER include/spdk/opal_spec.h 00:02:29.025 TEST_HEADER include/spdk/pci_ids.h 00:02:29.025 CC app/spdk_nvme_perf/perf.o 00:02:29.025 TEST_HEADER include/spdk/pipe.h 00:02:29.025 TEST_HEADER include/spdk/queue.h 00:02:29.025 TEST_HEADER include/spdk/reduce.h 00:02:29.025 TEST_HEADER include/spdk/rpc.h 00:02:29.025 TEST_HEADER include/spdk/scheduler.h 00:02:29.025 TEST_HEADER include/spdk/scsi.h 00:02:29.025 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.025 TEST_HEADER include/spdk/sock.h 00:02:29.025 TEST_HEADER include/spdk/stdinc.h 00:02:29.025 TEST_HEADER include/spdk/string.h 00:02:29.025 TEST_HEADER include/spdk/thread.h 00:02:29.025 TEST_HEADER include/spdk/trace.h 00:02:29.025 TEST_HEADER include/spdk/trace_parser.h 00:02:29.025 TEST_HEADER include/spdk/tree.h 00:02:29.025 TEST_HEADER include/spdk/ublk.h 00:02:29.025 TEST_HEADER include/spdk/util.h 00:02:29.025 TEST_HEADER include/spdk/uuid.h 00:02:29.025 TEST_HEADER include/spdk/version.h 00:02:29.025 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.025 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.025 TEST_HEADER include/spdk/vhost.h 00:02:29.025 TEST_HEADER include/spdk/vmd.h 00:02:29.025 TEST_HEADER include/spdk/xor.h 00:02:29.025 TEST_HEADER include/spdk/zipf.h 00:02:29.025 CXX test/cpp_headers/accel.o 00:02:29.025 LINK spdk_lspci 00:02:29.284 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.284 CC app/spdk_nvme_identify/identify.o 00:02:29.284 CC app/spdk_nvme_discover/discovery_aer.o 00:02:29.284 CC app/spdk_top/spdk_top.o 00:02:29.284 LINK spdk_tgt 00:02:29.284 CXX test/cpp_headers/accel_module.o 00:02:29.284 CC app/vhost/vhost.o 00:02:29.284 LINK cmb_copy 00:02:29.284 CC app/spdk_dd/spdk_dd.o 00:02:29.542 CXX test/cpp_headers/assert.o 00:02:29.542 LINK spdk_nvme_discover 00:02:29.542 LINK vhost 00:02:29.542 CC app/fio/nvme/fio_plugin.o 00:02:29.542 CC examples/nvme/abort/abort.o 00:02:29.542 CXX test/cpp_headers/barrier.o 00:02:29.800 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.800 CXX test/cpp_headers/base64.o 00:02:29.800 LINK spdk_dd 00:02:29.800 CXX test/cpp_headers/bdev.o 00:02:29.800 LINK pmr_persistence 00:02:29.800 LINK spdk_nvme_perf 00:02:30.059 CXX test/cpp_headers/bdev_module.o 00:02:30.059 CC test/dma/test_dma/test_dma.o 00:02:30.059 LINK spdk_nvme_identify 00:02:30.059 LINK abort 00:02:30.059 LINK iscsi_fuzz 00:02:30.059 LINK spdk_top 00:02:30.059 LINK spdk_nvme 00:02:30.059 CXX test/cpp_headers/bdev_zone.o 00:02:30.059 CC test/event/event_perf/event_perf.o 00:02:30.318 CC test/env/vtophys/vtophys.o 00:02:30.318 CC test/env/mem_callbacks/mem_callbacks.o 00:02:30.318 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.318 CC test/lvol/esnap/esnap.o 00:02:30.318 CC test/env/memory/memory_ut.o 
00:02:30.318 CC app/fio/bdev/fio_plugin.o 00:02:30.318 LINK event_perf 00:02:30.318 CXX test/cpp_headers/bit_array.o 00:02:30.318 LINK test_dma 00:02:30.318 LINK vtophys 00:02:30.318 CC test/env/pci/pci_ut.o 00:02:30.318 LINK env_dpdk_post_init 00:02:30.576 CXX test/cpp_headers/bit_pool.o 00:02:30.576 CC test/event/reactor/reactor.o 00:02:30.576 CC test/event/reactor_perf/reactor_perf.o 00:02:30.576 CC test/event/app_repeat/app_repeat.o 00:02:30.576 CC test/event/scheduler/scheduler.o 00:02:30.576 CXX test/cpp_headers/blob_bdev.o 00:02:30.576 LINK reactor 00:02:30.835 LINK reactor_perf 00:02:30.835 LINK pci_ut 00:02:30.835 LINK app_repeat 00:02:30.835 LINK spdk_bdev 00:02:30.835 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.835 LINK mem_callbacks 00:02:30.835 LINK scheduler 00:02:30.835 CC test/rpc_client/rpc_client_test.o 00:02:30.835 CXX test/cpp_headers/blobfs.o 00:02:31.094 CC test/nvme/aer/aer.o 00:02:31.094 CC test/nvme/reset/reset.o 00:02:31.094 CXX test/cpp_headers/blob.o 00:02:31.094 CC test/nvme/sgl/sgl.o 00:02:31.094 CC test/thread/poller_perf/poller_perf.o 00:02:31.094 LINK rpc_client_test 00:02:31.094 CC test/nvme/e2edp/nvme_dp.o 00:02:31.094 CC test/nvme/overhead/overhead.o 00:02:31.094 CXX test/cpp_headers/conf.o 00:02:31.094 LINK memory_ut 00:02:31.352 LINK poller_perf 00:02:31.352 LINK reset 00:02:31.352 CXX test/cpp_headers/config.o 00:02:31.352 LINK aer 00:02:31.352 CXX test/cpp_headers/cpuset.o 00:02:31.352 LINK sgl 00:02:31.352 CXX test/cpp_headers/crc16.o 00:02:31.352 CXX test/cpp_headers/crc32.o 00:02:31.352 CXX test/cpp_headers/crc64.o 00:02:31.352 LINK nvme_dp 00:02:31.352 CC test/nvme/err_injection/err_injection.o 00:02:31.352 LINK overhead 00:02:31.611 CC test/nvme/startup/startup.o 00:02:31.611 CC test/nvme/reserve/reserve.o 00:02:31.611 CXX test/cpp_headers/dif.o 00:02:31.611 CC test/nvme/simple_copy/simple_copy.o 00:02:31.611 CXX test/cpp_headers/dma.o 00:02:31.611 CC test/nvme/connect_stress/connect_stress.o 00:02:31.611 CC test/nvme/boot_partition/boot_partition.o 00:02:31.611 LINK err_injection 00:02:31.611 LINK startup 00:02:31.611 CC test/nvme/compliance/nvme_compliance.o 00:02:31.611 CXX test/cpp_headers/endian.o 00:02:31.611 LINK reserve 00:02:31.611 CXX test/cpp_headers/env_dpdk.o 00:02:31.870 LINK connect_stress 00:02:31.870 LINK simple_copy 00:02:31.870 LINK boot_partition 00:02:31.870 CC test/nvme/fused_ordering/fused_ordering.o 00:02:31.870 CXX test/cpp_headers/env.o 00:02:31.870 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:31.870 CC test/nvme/fdp/fdp.o 00:02:31.870 CXX test/cpp_headers/event.o 00:02:31.870 CC test/nvme/cuse/cuse.o 00:02:31.870 CXX test/cpp_headers/fd_group.o 00:02:31.870 CXX test/cpp_headers/fd.o 00:02:32.129 LINK nvme_compliance 00:02:32.129 CXX test/cpp_headers/file.o 00:02:32.129 LINK fused_ordering 00:02:32.129 LINK doorbell_aers 00:02:32.129 CXX test/cpp_headers/ftl.o 00:02:32.129 CXX test/cpp_headers/gpt_spec.o 00:02:32.129 CXX test/cpp_headers/hexlify.o 00:02:32.129 CXX test/cpp_headers/histogram_data.o 00:02:32.129 CXX test/cpp_headers/idxd.o 00:02:32.129 CXX test/cpp_headers/idxd_spec.o 00:02:32.129 LINK fdp 00:02:32.129 CXX test/cpp_headers/init.o 00:02:32.388 CXX test/cpp_headers/ioat.o 00:02:32.388 CXX test/cpp_headers/ioat_spec.o 00:02:32.388 CXX test/cpp_headers/iscsi_spec.o 00:02:32.388 CXX test/cpp_headers/json.o 00:02:32.388 CXX test/cpp_headers/jsonrpc.o 00:02:32.388 CXX test/cpp_headers/likely.o 00:02:32.388 CXX test/cpp_headers/log.o 00:02:32.388 CXX test/cpp_headers/lvol.o 00:02:32.388 CXX 
test/cpp_headers/memory.o 00:02:32.388 CXX test/cpp_headers/mmio.o 00:02:32.388 CXX test/cpp_headers/nbd.o 00:02:32.388 CXX test/cpp_headers/notify.o 00:02:32.388 CXX test/cpp_headers/nvme.o 00:02:32.647 CXX test/cpp_headers/nvme_intel.o 00:02:32.647 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.647 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.647 CXX test/cpp_headers/nvme_spec.o 00:02:32.647 CXX test/cpp_headers/nvme_zns.o 00:02:32.647 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.647 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.647 CXX test/cpp_headers/nvmf.o 00:02:32.647 CXX test/cpp_headers/nvmf_spec.o 00:02:32.648 CXX test/cpp_headers/nvmf_transport.o 00:02:32.648 CXX test/cpp_headers/opal.o 00:02:32.648 CXX test/cpp_headers/opal_spec.o 00:02:32.648 CXX test/cpp_headers/pci_ids.o 00:02:32.906 CXX test/cpp_headers/pipe.o 00:02:32.906 CXX test/cpp_headers/queue.o 00:02:32.906 CXX test/cpp_headers/reduce.o 00:02:32.906 CXX test/cpp_headers/rpc.o 00:02:32.906 CXX test/cpp_headers/scheduler.o 00:02:32.906 CXX test/cpp_headers/scsi.o 00:02:32.906 CXX test/cpp_headers/scsi_spec.o 00:02:32.906 CXX test/cpp_headers/sock.o 00:02:32.906 CXX test/cpp_headers/stdinc.o 00:02:32.906 CXX test/cpp_headers/string.o 00:02:32.906 LINK cuse 00:02:32.906 CXX test/cpp_headers/thread.o 00:02:33.165 CXX test/cpp_headers/trace.o 00:02:33.165 CXX test/cpp_headers/trace_parser.o 00:02:33.165 CXX test/cpp_headers/tree.o 00:02:33.165 CXX test/cpp_headers/ublk.o 00:02:33.165 CXX test/cpp_headers/util.o 00:02:33.165 CXX test/cpp_headers/uuid.o 00:02:33.165 CXX test/cpp_headers/version.o 00:02:33.165 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.165 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.165 CXX test/cpp_headers/vhost.o 00:02:33.165 CXX test/cpp_headers/vmd.o 00:02:33.165 CXX test/cpp_headers/xor.o 00:02:33.165 CXX test/cpp_headers/zipf.o 00:02:34.543 LINK esnap 00:02:34.803 00:02:34.803 real 0m58.258s 00:02:34.803 user 6m17.078s 00:02:34.803 sys 1m20.264s 00:02:34.803 07:30:00 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:34.803 07:30:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.803 ************************************ 00:02:34.803 END TEST make 00:02:34.803 ************************************ 00:02:35.096 07:30:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:35.096 07:30:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:35.096 07:30:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:35.096 07:30:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:35.096 07:30:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:35.096 07:30:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:35.096 07:30:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:35.096 07:30:00 -- scripts/common.sh@335 -- # IFS=.-: 00:02:35.096 07:30:00 -- scripts/common.sh@335 -- # read -ra ver1 00:02:35.096 07:30:00 -- scripts/common.sh@336 -- # IFS=.-: 00:02:35.096 07:30:00 -- scripts/common.sh@336 -- # read -ra ver2 00:02:35.096 07:30:00 -- scripts/common.sh@337 -- # local 'op=<' 00:02:35.096 07:30:00 -- scripts/common.sh@339 -- # ver1_l=2 00:02:35.096 07:30:00 -- scripts/common.sh@340 -- # ver2_l=1 00:02:35.096 07:30:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:35.096 07:30:00 -- scripts/common.sh@343 -- # case "$op" in 00:02:35.096 07:30:00 -- scripts/common.sh@344 -- # : 1 00:02:35.096 07:30:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:35.096 07:30:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:35.096 07:30:00 -- scripts/common.sh@364 -- # decimal 1 00:02:35.096 07:30:00 -- scripts/common.sh@352 -- # local d=1 00:02:35.096 07:30:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:35.096 07:30:00 -- scripts/common.sh@354 -- # echo 1 00:02:35.096 07:30:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:35.096 07:30:00 -- scripts/common.sh@365 -- # decimal 2 00:02:35.096 07:30:00 -- scripts/common.sh@352 -- # local d=2 00:02:35.096 07:30:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:35.096 07:30:00 -- scripts/common.sh@354 -- # echo 2 00:02:35.096 07:30:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:35.096 07:30:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:35.096 07:30:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:35.096 07:30:00 -- scripts/common.sh@367 -- # return 0 00:02:35.096 07:30:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:35.096 07:30:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:35.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.096 --rc genhtml_branch_coverage=1 00:02:35.096 --rc genhtml_function_coverage=1 00:02:35.096 --rc genhtml_legend=1 00:02:35.096 --rc geninfo_all_blocks=1 00:02:35.096 --rc geninfo_unexecuted_blocks=1 00:02:35.096 00:02:35.096 ' 00:02:35.096 07:30:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:35.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.096 --rc genhtml_branch_coverage=1 00:02:35.096 --rc genhtml_function_coverage=1 00:02:35.096 --rc genhtml_legend=1 00:02:35.096 --rc geninfo_all_blocks=1 00:02:35.096 --rc geninfo_unexecuted_blocks=1 00:02:35.096 00:02:35.096 ' 00:02:35.096 07:30:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:35.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.096 --rc genhtml_branch_coverage=1 00:02:35.096 --rc genhtml_function_coverage=1 00:02:35.096 --rc genhtml_legend=1 00:02:35.096 --rc geninfo_all_blocks=1 00:02:35.096 --rc geninfo_unexecuted_blocks=1 00:02:35.096 00:02:35.096 ' 00:02:35.096 07:30:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:35.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:35.096 --rc genhtml_branch_coverage=1 00:02:35.096 --rc genhtml_function_coverage=1 00:02:35.096 --rc genhtml_legend=1 00:02:35.096 --rc geninfo_all_blocks=1 00:02:35.096 --rc geninfo_unexecuted_blocks=1 00:02:35.096 00:02:35.096 ' 00:02:35.096 07:30:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:35.096 07:30:00 -- nvmf/common.sh@7 -- # uname -s 00:02:35.096 07:30:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:35.096 07:30:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:35.096 07:30:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:35.096 07:30:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:35.096 07:30:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:35.096 07:30:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:35.096 07:30:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:35.096 07:30:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:35.096 07:30:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:35.096 07:30:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:35.096 07:30:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:02:35.096 
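The lt/cmp_versions trace just above decides whether the installed lcov (1.15 here) predates 2.x; because it does, the legacy --rc lcov_branch_coverage/lcov_function_coverage options are selected. A minimal sketch of that component-wise comparison, assuming GNU bash; the helper name below is illustrative and not the repository's own function:

#!/usr/bin/env bash
# version_lt A B -> returns 0 (true) when A sorts before B, comparing dot-separated fields numerically
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x > y )) && return 1           # A is newer
        (( x < y )) && return 0           # A is older
    done
    return 1                              # versions are equal
}

# e.g. pick the pre-2.0 coverage flags when the installed lcov is older than 2
if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi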
07:30:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:02:35.096 07:30:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:35.096 07:30:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:35.096 07:30:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:35.096 07:30:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.096 07:30:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:35.096 07:30:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.096 07:30:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.096 07:30:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.096 07:30:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.096 07:30:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.096 07:30:00 -- paths/export.sh@5 -- # export PATH 00:02:35.096 07:30:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.096 07:30:00 -- nvmf/common.sh@46 -- # : 0 00:02:35.096 07:30:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:35.096 07:30:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:35.096 07:30:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:35.096 07:30:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:35.096 07:30:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:35.096 07:30:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:35.096 07:30:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:35.096 07:30:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:35.096 07:30:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:35.096 07:30:00 -- spdk/autotest.sh@32 -- # uname -s 00:02:35.096 07:30:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:35.096 07:30:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:35.096 07:30:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:35.096 07:30:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:35.096 07:30:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:35.096 07:30:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:35.096 07:30:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:35.096 07:30:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:35.096 07:30:00 -- spdk/autotest.sh@48 -- # 
udevadm_pid=48037 00:02:35.096 07:30:00 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:35.096 07:30:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:35.096 07:30:00 -- spdk/autotest.sh@54 -- # echo 48048 00:02:35.096 07:30:00 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:35.096 07:30:00 -- spdk/autotest.sh@56 -- # echo 48052 00:02:35.096 07:30:00 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:35.096 07:30:00 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:35.096 07:30:00 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.096 07:30:00 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:35.096 07:30:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:35.096 07:30:00 -- common/autotest_common.sh@10 -- # set +x 00:02:35.096 07:30:00 -- spdk/autotest.sh@70 -- # create_test_list 00:02:35.096 07:30:00 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:35.096 07:30:00 -- common/autotest_common.sh@10 -- # set +x 00:02:35.355 07:30:00 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:35.355 07:30:00 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:35.355 07:30:00 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:35.355 07:30:00 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.355 07:30:00 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:35.355 07:30:00 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:35.355 07:30:00 -- common/autotest_common.sh@1450 -- # uname 00:02:35.355 07:30:00 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:35.355 07:30:00 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:35.355 07:30:00 -- common/autotest_common.sh@1470 -- # uname 00:02:35.355 07:30:00 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:35.355 07:30:00 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:35.355 07:30:00 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:35.355 lcov: LCOV version 1.15 00:02:35.355 07:30:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:43.472 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:43.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:43.472 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:43.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:43.472 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:43.472 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:01.571 07:30:25 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:01.571 07:30:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:01.571 07:30:25 -- common/autotest_common.sh@10 -- # set +x 00:03:01.571 07:30:25 -- spdk/autotest.sh@89 -- # rm -f 00:03:01.571 07:30:25 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:01.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:01.571 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:01.571 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:01.571 07:30:26 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:01.571 07:30:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:01.571 07:30:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:01.571 07:30:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:01.571 07:30:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:01.571 07:30:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:01.571 07:30:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:01.571 07:30:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:01.571 07:30:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:01.571 07:30:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:01.571 07:30:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:01.571 07:30:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:01.571 07:30:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:01.571 07:30:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:01.571 07:30:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:01.571 07:30:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:01.571 07:30:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:01.571 07:30:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:01.571 07:30:26 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:01.571 07:30:26 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:01.571 07:30:26 -- spdk/autotest.sh@108 -- # grep -v p 00:03:01.571 07:30:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:01.571 07:30:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:01.571 07:30:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:01.571 07:30:26 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:01.571 07:30:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:01.571 No valid GPT data, bailing 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
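The get_zoned_devs loop traced above walks /sys/block/nvme* and treats any namespace whose queue/zoned attribute reads something other than "none" as zoned, so it can be excluded from the destructive steps that follow. A small stand-alone sketch of the same check; the device paths are simply whatever the host exposes:

#!/usr/bin/env bash
# Collect zoned NVMe block devices by inspecting the kernel's queue/zoned attribute.
declare -a zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue           # attribute absent on very old kernels
    if [[ $(<"$nvme/queue/zoned") != none ]]; then   # "none" means a conventional namespace
        zoned_devs+=("${nvme##*/}")
    fi
done
printf 'zoned: %s\n' "${zoned_devs[@]:-<none>}"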
00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # pt= 00:03:01.571 07:30:26 -- scripts/common.sh@394 -- # return 1 00:03:01.571 07:30:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:01.571 1+0 records in 00:03:01.571 1+0 records out 00:03:01.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437812 s, 240 MB/s 00:03:01.571 07:30:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:01.571 07:30:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:01.571 07:30:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:01.571 07:30:26 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:01.571 07:30:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:01.571 No valid GPT data, bailing 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # pt= 00:03:01.571 07:30:26 -- scripts/common.sh@394 -- # return 1 00:03:01.571 07:30:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:01.571 1+0 records in 00:03:01.571 1+0 records out 00:03:01.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414075 s, 253 MB/s 00:03:01.571 07:30:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:01.571 07:30:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:01.571 07:30:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:03:01.571 07:30:26 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:01.571 07:30:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:01.571 No valid GPT data, bailing 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # pt= 00:03:01.571 07:30:26 -- scripts/common.sh@394 -- # return 1 00:03:01.571 07:30:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:01.571 1+0 records in 00:03:01.571 1+0 records out 00:03:01.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502449 s, 209 MB/s 00:03:01.571 07:30:26 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:01.571 07:30:26 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:01.571 07:30:26 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:03:01.571 07:30:26 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:01.571 07:30:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:01.571 No valid GPT data, bailing 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:01.571 07:30:26 -- scripts/common.sh@393 -- # pt= 00:03:01.571 07:30:26 -- scripts/common.sh@394 -- # return 1 00:03:01.572 07:30:26 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:01.572 1+0 records in 00:03:01.572 1+0 records out 00:03:01.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044729 s, 234 MB/s 00:03:01.572 07:30:26 -- spdk/autotest.sh@116 -- # sync 00:03:01.572 07:30:27 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:01.572 07:30:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:01.572 07:30:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:03.477 07:30:29 -- spdk/autotest.sh@122 -- # uname -s 00:03:03.477 07:30:29 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
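Each namespace above is first probed for a partition table (spdk-gpt.py, then blkid -s PTTYPE); only when both probes come back empty is its first MiB overwritten with zeros before the tests run. A hedged sketch of that guard-then-wipe step, assuming blkid and dd behave as shown in the log:

#!/usr/bin/env bash
# Zero the first MiB of every NVMe namespace that carries no partition table.
for dev in /dev/nvme*n*; do
    [[ -b $dev ]] || continue                      # skip non-existent glob results
    [[ $dev == *p* ]] && continue                  # skip partitions such as nvme0n1p1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)  # empty output -> no GPT/MBR found
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # clear stale metadata, as in "No valid GPT data, bailing"
    fi
done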
00:03:03.477 07:30:29 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:03.477 07:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:03.477 07:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:03.477 07:30:29 -- common/autotest_common.sh@10 -- # set +x 00:03:03.477 ************************************ 00:03:03.477 START TEST setup.sh 00:03:03.477 ************************************ 00:03:03.477 07:30:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:03.737 * Looking for test storage... 00:03:03.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:03.737 07:30:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:03.737 07:30:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:03.737 07:30:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:03.737 07:30:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:03.737 07:30:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:03.737 07:30:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:03.737 07:30:29 -- scripts/common.sh@335 -- # IFS=.-: 00:03:03.737 07:30:29 -- scripts/common.sh@335 -- # read -ra ver1 00:03:03.737 07:30:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:03.737 07:30:29 -- scripts/common.sh@336 -- # read -ra ver2 00:03:03.737 07:30:29 -- scripts/common.sh@337 -- # local 'op=<' 00:03:03.737 07:30:29 -- scripts/common.sh@339 -- # ver1_l=2 00:03:03.737 07:30:29 -- scripts/common.sh@340 -- # ver2_l=1 00:03:03.737 07:30:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:03.737 07:30:29 -- scripts/common.sh@343 -- # case "$op" in 00:03:03.737 07:30:29 -- scripts/common.sh@344 -- # : 1 00:03:03.737 07:30:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:03.737 07:30:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:03.737 07:30:29 -- scripts/common.sh@364 -- # decimal 1 00:03:03.737 07:30:29 -- scripts/common.sh@352 -- # local d=1 00:03:03.737 07:30:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:03.737 07:30:29 -- scripts/common.sh@354 -- # echo 1 00:03:03.737 07:30:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:03.737 07:30:29 -- scripts/common.sh@365 -- # decimal 2 00:03:03.737 07:30:29 -- scripts/common.sh@352 -- # local d=2 00:03:03.737 07:30:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:03.737 07:30:29 -- scripts/common.sh@354 -- # echo 2 00:03:03.737 07:30:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:03.737 07:30:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:03.737 07:30:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:03.737 07:30:29 -- scripts/common.sh@367 -- # return 0 00:03:03.737 07:30:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:03.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.737 --rc genhtml_branch_coverage=1 00:03:03.737 --rc genhtml_function_coverage=1 00:03:03.737 --rc genhtml_legend=1 00:03:03.737 --rc geninfo_all_blocks=1 00:03:03.737 --rc geninfo_unexecuted_blocks=1 00:03:03.737 00:03:03.737 ' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:03.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.737 --rc genhtml_branch_coverage=1 00:03:03.737 --rc genhtml_function_coverage=1 00:03:03.737 --rc genhtml_legend=1 00:03:03.737 --rc geninfo_all_blocks=1 00:03:03.737 --rc geninfo_unexecuted_blocks=1 00:03:03.737 00:03:03.737 ' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:03.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.737 --rc genhtml_branch_coverage=1 00:03:03.737 --rc genhtml_function_coverage=1 00:03:03.737 --rc genhtml_legend=1 00:03:03.737 --rc geninfo_all_blocks=1 00:03:03.737 --rc geninfo_unexecuted_blocks=1 00:03:03.737 00:03:03.737 ' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:03.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.737 --rc genhtml_branch_coverage=1 00:03:03.737 --rc genhtml_function_coverage=1 00:03:03.737 --rc genhtml_legend=1 00:03:03.737 --rc geninfo_all_blocks=1 00:03:03.737 --rc geninfo_unexecuted_blocks=1 00:03:03.737 00:03:03.737 ' 00:03:03.737 07:30:29 -- setup/test-setup.sh@10 -- # uname -s 00:03:03.737 07:30:29 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:03.737 07:30:29 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:03.737 07:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:03.737 07:30:29 -- common/autotest_common.sh@10 -- # set +x 00:03:03.737 ************************************ 00:03:03.737 START TEST acl 00:03:03.737 ************************************ 00:03:03.737 07:30:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:03.737 * Looking for test storage... 
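Every START TEST / END TEST banner in this log comes from a run_test-style wrapper that names the test, runs it, and reports how long it took. A rough stand-in under those assumptions; the banner format mimics the log and the function body is illustrative, not the autotest framework's actual implementation:

#!/usr/bin/env bash
# Run a named test command between START/END banners and report duration and exit code.
run_test() {
    local name=$1 banner='************************************'
    shift
    printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
    local start=$SECONDS rc=0
    "$@" || rc=$?
    printf '%s\nEND TEST %s (%ss, exit %s)\n%s\n' "$banner" "$name" "$((SECONDS - start))" "$rc" "$banner"
    return "$rc"
}

run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh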
00:03:03.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:03.737 07:30:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:03.737 07:30:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:03.737 07:30:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:03.996 07:30:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:03.996 07:30:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:03.996 07:30:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:03.996 07:30:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:03.996 07:30:29 -- scripts/common.sh@335 -- # IFS=.-: 00:03:03.996 07:30:29 -- scripts/common.sh@335 -- # read -ra ver1 00:03:03.996 07:30:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:03.996 07:30:29 -- scripts/common.sh@336 -- # read -ra ver2 00:03:03.996 07:30:29 -- scripts/common.sh@337 -- # local 'op=<' 00:03:03.996 07:30:29 -- scripts/common.sh@339 -- # ver1_l=2 00:03:03.996 07:30:29 -- scripts/common.sh@340 -- # ver2_l=1 00:03:03.996 07:30:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:03.996 07:30:29 -- scripts/common.sh@343 -- # case "$op" in 00:03:03.996 07:30:29 -- scripts/common.sh@344 -- # : 1 00:03:03.996 07:30:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:03.996 07:30:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:03.996 07:30:29 -- scripts/common.sh@364 -- # decimal 1 00:03:03.996 07:30:29 -- scripts/common.sh@352 -- # local d=1 00:03:03.996 07:30:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:03.996 07:30:29 -- scripts/common.sh@354 -- # echo 1 00:03:03.996 07:30:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:03.996 07:30:29 -- scripts/common.sh@365 -- # decimal 2 00:03:03.996 07:30:29 -- scripts/common.sh@352 -- # local d=2 00:03:03.996 07:30:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:03.996 07:30:29 -- scripts/common.sh@354 -- # echo 2 00:03:03.996 07:30:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:03.996 07:30:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:03.996 07:30:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:03.996 07:30:29 -- scripts/common.sh@367 -- # return 0 00:03:03.996 07:30:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:03.996 07:30:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:03.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.996 --rc genhtml_branch_coverage=1 00:03:03.996 --rc genhtml_function_coverage=1 00:03:03.996 --rc genhtml_legend=1 00:03:03.996 --rc geninfo_all_blocks=1 00:03:03.996 --rc geninfo_unexecuted_blocks=1 00:03:03.996 00:03:03.996 ' 00:03:03.996 07:30:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:03.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.996 --rc genhtml_branch_coverage=1 00:03:03.996 --rc genhtml_function_coverage=1 00:03:03.996 --rc genhtml_legend=1 00:03:03.996 --rc geninfo_all_blocks=1 00:03:03.996 --rc geninfo_unexecuted_blocks=1 00:03:03.996 00:03:03.996 ' 00:03:03.996 07:30:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:03.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.996 --rc genhtml_branch_coverage=1 00:03:03.996 --rc genhtml_function_coverage=1 00:03:03.996 --rc genhtml_legend=1 00:03:03.996 --rc geninfo_all_blocks=1 00:03:03.996 --rc geninfo_unexecuted_blocks=1 00:03:03.996 00:03:03.996 ' 00:03:03.996 07:30:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:03.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:03.996 --rc genhtml_branch_coverage=1 00:03:03.996 --rc genhtml_function_coverage=1 00:03:03.996 --rc genhtml_legend=1 00:03:03.996 --rc geninfo_all_blocks=1 00:03:03.996 --rc geninfo_unexecuted_blocks=1 00:03:03.996 00:03:03.996 ' 00:03:03.996 07:30:29 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:03.996 07:30:29 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:03.996 07:30:29 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:03.996 07:30:29 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:03.996 07:30:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:03.996 07:30:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:03.996 07:30:29 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:03.996 07:30:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.996 07:30:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:03.996 07:30:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:03.996 07:30:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:03.996 07:30:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:03.996 07:30:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:03.996 07:30:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:03.996 07:30:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:03.996 07:30:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:03.996 07:30:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:03.996 07:30:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:03.997 07:30:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:03.997 07:30:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:03.997 07:30:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:03.997 07:30:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:03.997 07:30:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:03.997 07:30:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:03.997 07:30:29 -- setup/acl.sh@12 -- # devs=() 00:03:03.997 07:30:29 -- setup/acl.sh@12 -- # declare -a devs 00:03:03.997 07:30:29 -- setup/acl.sh@13 -- # drivers=() 00:03:03.997 07:30:29 -- setup/acl.sh@13 -- # declare -A drivers 00:03:03.997 07:30:29 -- setup/acl.sh@51 -- # setup reset 00:03:03.997 07:30:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.997 07:30:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:04.564 07:30:30 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:04.564 07:30:30 -- setup/acl.sh@16 -- # local dev driver 00:03:04.564 07:30:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.564 07:30:30 -- setup/acl.sh@15 -- # setup output status 00:03:04.564 07:30:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.564 07:30:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:04.822 Hugepages 00:03:04.823 node hugesize free / total 00:03:04.823 07:30:30 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.823 07:30:30 -- setup/acl.sh@19 -- # continue 00:03:04.823 07:30:30 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:03:04.823 00:03:04.823 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.823 07:30:30 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.823 07:30:30 -- setup/acl.sh@19 -- # continue 00:03:04.823 07:30:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.823 07:30:30 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:04.823 07:30:30 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:04.823 07:30:30 -- setup/acl.sh@20 -- # continue 00:03:04.823 07:30:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.823 07:30:30 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:04.823 07:30:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:04.823 07:30:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:04.823 07:30:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:04.823 07:30:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:04.823 07:30:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.081 07:30:30 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:05.081 07:30:30 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:05.081 07:30:30 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:05.081 07:30:30 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:05.081 07:30:30 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:05.081 07:30:30 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.081 07:30:30 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:05.081 07:30:30 -- setup/acl.sh@54 -- # run_test denied denied 00:03:05.081 07:30:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.081 07:30:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.081 07:30:30 -- common/autotest_common.sh@10 -- # set +x 00:03:05.081 ************************************ 00:03:05.081 START TEST denied 00:03:05.081 ************************************ 00:03:05.081 07:30:30 -- common/autotest_common.sh@1114 -- # denied 00:03:05.081 07:30:30 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:05.081 07:30:30 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:05.081 07:30:30 -- setup/acl.sh@38 -- # setup output config 00:03:05.081 07:30:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.081 07:30:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:06.018 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:06.018 07:30:31 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:06.018 07:30:31 -- setup/acl.sh@28 -- # local dev driver 00:03:06.018 07:30:31 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:06.018 07:30:31 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:06.018 07:30:31 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:06.018 07:30:31 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:06.018 07:30:31 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:06.018 07:30:31 -- setup/acl.sh@41 -- # setup reset 00:03:06.018 07:30:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.018 07:30:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:06.587 00:03:06.587 real 0m1.492s 00:03:06.587 user 0m0.622s 00:03:06.587 sys 0m0.837s 00:03:06.587 07:30:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:06.587 07:30:32 -- common/autotest_common.sh@10 -- # set +x 00:03:06.587 ************************************ 00:03:06.587 END TEST denied 00:03:06.587 
************************************ 00:03:06.587 07:30:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:06.587 07:30:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:06.587 07:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:06.587 07:30:32 -- common/autotest_common.sh@10 -- # set +x 00:03:06.587 ************************************ 00:03:06.587 START TEST allowed 00:03:06.587 ************************************ 00:03:06.587 07:30:32 -- common/autotest_common.sh@1114 -- # allowed 00:03:06.587 07:30:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:06.587 07:30:32 -- setup/acl.sh@45 -- # setup output config 00:03:06.587 07:30:32 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:06.587 07:30:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.587 07:30:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:07.522 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:07.523 07:30:32 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:07.523 07:30:32 -- setup/acl.sh@28 -- # local dev driver 00:03:07.523 07:30:32 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:07.523 07:30:32 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:07.523 07:30:32 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:07.523 07:30:32 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.523 07:30:32 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.523 07:30:32 -- setup/acl.sh@48 -- # setup reset 00:03:07.523 07:30:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.523 07:30:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:08.101 00:03:08.101 real 0m1.561s 00:03:08.101 user 0m0.692s 00:03:08.101 sys 0m0.878s 00:03:08.101 07:30:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:08.101 07:30:33 -- common/autotest_common.sh@10 -- # set +x 00:03:08.101 ************************************ 00:03:08.101 END TEST allowed 00:03:08.101 ************************************ 00:03:08.101 ************************************ 00:03:08.101 END TEST acl 00:03:08.101 ************************************ 00:03:08.101 00:03:08.101 real 0m4.432s 00:03:08.101 user 0m1.945s 00:03:08.101 sys 0m2.494s 00:03:08.101 07:30:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:08.101 07:30:33 -- common/autotest_common.sh@10 -- # set +x 00:03:08.101 07:30:33 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:08.101 07:30:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.101 07:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.101 07:30:33 -- common/autotest_common.sh@10 -- # set +x 00:03:08.101 ************************************ 00:03:08.101 START TEST hugepages 00:03:08.101 ************************************ 00:03:08.101 07:30:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:08.390 * Looking for test storage... 
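The denied/allowed checks above resolve which kernel driver owns a controller by following the driver symlink under /sys/bus/pci/devices, then comparing it against the PCI_BLOCKED/PCI_ALLOWED lists. The same probe in isolation, using the BDF from this run (adjust for other hosts):

#!/usr/bin/env bash
# Report the kernel driver currently bound to a PCI device, as the acl verify step does.
bdf=0000:00:06.0
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "$bdf is bound to $driver"    # e.g. nvme before setup.sh, uio_pci_generic after
else
    echo "$bdf has no driver bound"
fi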
00:03:08.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:08.390 07:30:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:08.390 07:30:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:08.390 07:30:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:08.390 07:30:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:08.390 07:30:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:08.390 07:30:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:08.390 07:30:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:08.390 07:30:33 -- scripts/common.sh@335 -- # IFS=.-: 00:03:08.390 07:30:33 -- scripts/common.sh@335 -- # read -ra ver1 00:03:08.390 07:30:33 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.390 07:30:33 -- scripts/common.sh@336 -- # read -ra ver2 00:03:08.390 07:30:33 -- scripts/common.sh@337 -- # local 'op=<' 00:03:08.390 07:30:33 -- scripts/common.sh@339 -- # ver1_l=2 00:03:08.390 07:30:33 -- scripts/common.sh@340 -- # ver2_l=1 00:03:08.390 07:30:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:08.390 07:30:33 -- scripts/common.sh@343 -- # case "$op" in 00:03:08.390 07:30:33 -- scripts/common.sh@344 -- # : 1 00:03:08.390 07:30:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:08.390 07:30:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.390 07:30:33 -- scripts/common.sh@364 -- # decimal 1 00:03:08.390 07:30:33 -- scripts/common.sh@352 -- # local d=1 00:03:08.390 07:30:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.390 07:30:33 -- scripts/common.sh@354 -- # echo 1 00:03:08.390 07:30:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:08.390 07:30:33 -- scripts/common.sh@365 -- # decimal 2 00:03:08.390 07:30:33 -- scripts/common.sh@352 -- # local d=2 00:03:08.390 07:30:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.390 07:30:33 -- scripts/common.sh@354 -- # echo 2 00:03:08.390 07:30:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:08.390 07:30:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:08.390 07:30:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:08.390 07:30:33 -- scripts/common.sh@367 -- # return 0 00:03:08.390 07:30:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.390 07:30:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.390 --rc genhtml_branch_coverage=1 00:03:08.390 --rc genhtml_function_coverage=1 00:03:08.390 --rc genhtml_legend=1 00:03:08.390 --rc geninfo_all_blocks=1 00:03:08.390 --rc geninfo_unexecuted_blocks=1 00:03:08.390 00:03:08.390 ' 00:03:08.390 07:30:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.390 --rc genhtml_branch_coverage=1 00:03:08.390 --rc genhtml_function_coverage=1 00:03:08.390 --rc genhtml_legend=1 00:03:08.390 --rc geninfo_all_blocks=1 00:03:08.390 --rc geninfo_unexecuted_blocks=1 00:03:08.390 00:03:08.390 ' 00:03:08.390 07:30:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.390 --rc genhtml_branch_coverage=1 00:03:08.390 --rc genhtml_function_coverage=1 00:03:08.390 --rc genhtml_legend=1 00:03:08.390 --rc geninfo_all_blocks=1 00:03:08.390 --rc geninfo_unexecuted_blocks=1 00:03:08.390 00:03:08.390 ' 00:03:08.390 07:30:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:08.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.390 --rc genhtml_branch_coverage=1 00:03:08.391 --rc genhtml_function_coverage=1 00:03:08.391 --rc genhtml_legend=1 00:03:08.391 --rc geninfo_all_blocks=1 00:03:08.391 --rc geninfo_unexecuted_blocks=1 00:03:08.391 00:03:08.391 ' 00:03:08.391 07:30:33 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:08.391 07:30:33 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:08.391 07:30:33 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:08.391 07:30:33 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:08.391 07:30:33 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:08.391 07:30:33 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:08.391 07:30:33 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:08.391 07:30:33 -- setup/common.sh@18 -- # local node= 00:03:08.391 07:30:33 -- setup/common.sh@19 -- # local var val 00:03:08.391 07:30:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.391 07:30:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.391 07:30:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.391 07:30:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.391 07:30:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.391 07:30:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 5995584 kB' 'MemAvailable: 7376624 kB' 'Buffers: 2684 kB' 'Cached: 1594780 kB' 'SwapCached: 0 kB' 'Active: 455352 kB' 'Inactive: 1259124 kB' 'Active(anon): 127520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259124 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 118596 kB' 'Mapped: 51096 kB' 'Shmem: 10508 kB' 'KReclaimable: 62472 kB' 'Slab: 154592 kB' 'SReclaimable: 62472 kB' 'SUnreclaim: 92120 kB' 'KernelStack: 6496 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 320620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- 
setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.391 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.391 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # continue 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.392 07:30:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.392 07:30:33 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.392 07:30:33 -- setup/common.sh@33 -- # echo 2048 00:03:08.392 07:30:33 -- setup/common.sh@33 -- # return 0 00:03:08.392 07:30:33 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:08.392 07:30:33 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:08.392 07:30:33 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:08.392 07:30:33 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:08.392 07:30:33 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:08.392 07:30:33 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:08.392 07:30:33 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:08.392 07:30:33 -- setup/hugepages.sh@207 -- # get_nodes 00:03:08.392 07:30:33 -- setup/hugepages.sh@27 -- # local node 00:03:08.392 07:30:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.392 07:30:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:08.392 07:30:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:08.392 07:30:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.392 07:30:33 -- setup/hugepages.sh@208 -- # clear_hp 00:03:08.392 07:30:33 -- setup/hugepages.sh@37 -- # local node hp 00:03:08.392 07:30:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.392 07:30:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.392 07:30:33 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.392 07:30:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.392 07:30:33 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.392 07:30:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:08.392 07:30:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:08.392 07:30:33 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:08.392 07:30:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.392 07:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.392 07:30:33 -- common/autotest_common.sh@10 -- # set +x 00:03:08.392 ************************************ 00:03:08.392 START TEST default_setup 00:03:08.392 ************************************ 00:03:08.392 07:30:33 -- common/autotest_common.sh@1114 -- # default_setup 00:03:08.392 07:30:33 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:08.392 07:30:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.392 07:30:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:08.392 07:30:33 -- setup/hugepages.sh@51 -- # shift 00:03:08.392 07:30:33 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:08.392 07:30:33 -- setup/hugepages.sh@52 -- # local node_ids 00:03:08.392 07:30:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.392 07:30:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.392 07:30:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:08.392 07:30:33 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:08.392 07:30:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.392 07:30:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.392 07:30:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:08.392 07:30:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.392 07:30:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.392 07:30:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:08.392 07:30:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.392 07:30:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:08.392 07:30:33 -- setup/hugepages.sh@73 -- # return 0 00:03:08.392 07:30:33 -- setup/hugepages.sh@137 -- # setup output 00:03:08.392 07:30:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.392 07:30:33 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:09.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:09.338 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:09.338 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:09.338 07:30:34 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:09.338 07:30:34 -- setup/hugepages.sh@89 -- # local node 00:03:09.338 07:30:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.338 07:30:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.338 07:30:34 -- setup/hugepages.sh@92 -- # local surp 00:03:09.338 07:30:34 -- setup/hugepages.sh@93 -- # local resv 00:03:09.338 07:30:34 -- setup/hugepages.sh@94 -- # local anon 00:03:09.338 07:30:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.338 07:30:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.338 07:30:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.338 07:30:34 -- setup/common.sh@18 -- # local node= 00:03:09.338 07:30:34 -- setup/common.sh@19 -- # local var val 00:03:09.338 07:30:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.338 07:30:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.338 07:30:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.338 07:30:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.338 07:30:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.338 07:30:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.338 07:30:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8104844 kB' 'MemAvailable: 9485680 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 457084 kB' 'Inactive: 1259128 kB' 'Active(anon): 129252 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259128 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 50944 kB' 'Shmem: 10488 kB' 'KReclaimable: 62052 kB' 'Slab: 154208 kB' 'SReclaimable: 62052 kB' 'SUnreclaim: 92156 kB' 'KernelStack: 6480 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.338 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.338 07:30:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- 
setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.339 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.339 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.340 07:30:34 -- setup/common.sh@33 -- # echo 0 00:03:09.340 07:30:34 -- setup/common.sh@33 -- # return 0 00:03:09.340 07:30:34 -- setup/hugepages.sh@97 -- # anon=0 00:03:09.340 07:30:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.340 07:30:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.340 07:30:34 -- setup/common.sh@18 -- # local node= 00:03:09.340 07:30:34 -- setup/common.sh@19 -- # local var val 00:03:09.340 07:30:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.340 07:30:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.340 07:30:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.340 07:30:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.340 07:30:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.340 07:30:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105112 kB' 'MemAvailable: 9485956 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456640 kB' 'Inactive: 1259136 kB' 'Active(anon): 128808 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62052 kB' 'Slab: 154188 kB' 'SReclaimable: 62052 kB' 'SUnreclaim: 92136 kB' 'KernelStack: 6464 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.340 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.340 07:30:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 
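The key-by-key scan running through this stretch of the trace is setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or a per-node meminfo file when a node is given), reads each "field: value" pair, and skips with continue until the requested field matches, at which point it echoes the bare value. Reconstructed from the xtrace above, it behaves roughly like the sketch below; names follow the trace, but the exact SPDK implementation may differ in its details.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # with a node argument, read that node's view instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # not the field we want yet
            echo "$val"                        # value only, without the trailing "kB"
            return 0
        done
        return 1
    }

    get_meminfo Hugepagesize   # prints 2048 on this VM, which is where the
                               # default_hugepages=2048 value earlier in the trace comes from
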
00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- 
setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 
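Once AnonHugePages, HugePages_Surp and HugePages_Rsvd have each been read this way (each returns 0 in this run), verify_nr_hugepages checks the numbers against the pool the test configured. The 1024-page target comes from get_test_nr_hugepages 2097152 0 earlier in the trace: 2097152 kB requested divided by the 2048 kB default page size gives 1024 pages on the single node. The accounting amounts to roughly the following; check_hugepage_pool is a hypothetical name used for illustration, not the literal setup/hugepages.sh function.

    # Requires the get_meminfo sketch above. The requested 2 MiB pool must be
    # fully visible to the kernel, with no surplus or reserved pages left over.
    check_hugepage_pool() {
        local requested=$1   # e.g. 1024 pages of 2048 kB each
        local total surp resv
        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
        (( requested == total && surp == 0 && resv == 0 ))
    }

    check_hugepage_pool 1024   # in this run: HugePages_Total=1024, surp=0, resv=0
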
00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.341 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.341 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.342 07:30:34 -- setup/common.sh@33 -- # echo 0 00:03:09.342 07:30:34 -- setup/common.sh@33 -- # return 0 00:03:09.342 07:30:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:09.342 07:30:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.342 07:30:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.342 07:30:34 -- setup/common.sh@18 -- # local node= 00:03:09.342 07:30:34 -- setup/common.sh@19 -- # local var val 00:03:09.342 07:30:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.342 07:30:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.342 07:30:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.342 07:30:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.342 07:30:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.342 07:30:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.342 
07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105120 kB' 'MemAvailable: 9485964 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 1259136 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62052 kB' 'Slab: 154180 kB' 'SReclaimable: 62052 kB' 'SUnreclaim: 92128 kB' 'KernelStack: 6448 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 
07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.342 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.342 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 
07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.343 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.343 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.344 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.344 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.344 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.344 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.344 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.344 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.344 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.344 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.344 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.344 07:30:34 -- setup/common.sh@33 -- # echo 0 00:03:09.344 07:30:34 -- setup/common.sh@33 -- # return 0 00:03:09.344 07:30:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:09.344 07:30:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.344 nr_hugepages=1024 00:03:09.344 resv_hugepages=0 00:03:09.344 surplus_hugepages=0 00:03:09.344 anon_hugepages=0 00:03:09.344 07:30:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.344 07:30:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.344 07:30:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.344 07:30:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.344 07:30:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.344 07:30:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.344 07:30:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.344 07:30:34 -- setup/common.sh@18 -- # local node= 00:03:09.344 07:30:34 -- setup/common.sh@19 -- # local var val 00:03:09.344 07:30:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.344 07:30:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.344 07:30:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.344 07:30:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.344 07:30:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.344 07:30:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.344 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.344 07:30:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105404 kB' 'MemAvailable: 9486248 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456580 kB' 'Inactive: 1259136 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62052 kB' 'Slab: 154176 kB' 
'SReclaimable: 62052 kB' 'SUnreclaim: 92124 kB' 'KernelStack: 6448 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:09.344 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 
07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- 
setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 07:30:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.610 07:30:34 -- 
setup/common.sh@32 -- # continue 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 07:30:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.611 07:30:34 -- setup/common.sh@33 -- # echo 1024 00:03:09.611 07:30:34 -- setup/common.sh@33 -- # return 0 00:03:09.611 07:30:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.611 07:30:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.611 07:30:34 -- setup/hugepages.sh@27 -- # local node 00:03:09.611 07:30:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.611 07:30:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:09.611 07:30:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:09.611 07:30:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.611 07:30:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.611 07:30:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.611 07:30:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.611 07:30:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.611 07:30:34 -- setup/common.sh@18 -- # local node=0 00:03:09.611 07:30:34 -- setup/common.sh@19 -- # local var val 00:03:09.611 07:30:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.611 07:30:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.611 07:30:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.611 07:30:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.611 07:30:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.611 07:30:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.611 07:30:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105408 kB' 'MemUsed: 4133696 kB' 'SwapCached: 0 kB' 'Active: 456692 kB' 'Inactive: 1259136 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1597456 kB' 'Mapped: 50856 kB' 'AnonPages: 119940 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62052 kB' 'Slab: 154172 kB' 'SReclaimable: 62052 kB' 'SUnreclaim: 92120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 
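A hedged sketch of the accounting step visible just above (reconstructed only from the trace, not quoted from setup/hugepages.sh): the HugePages_Total read back from meminfo must equal the requested pages plus surplus and reserved, and get_nodes then seeds one expected count per NUMA node (a single node on this VM) before the per-node HugePages_Surp lookup that follows.

# Sketch, assumptions as stated in the text above; variable names are illustrative.
nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)        # 1024 on this runner
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count" >&2

nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node ]] || continue                  # skip if the glob did not match
    nodes_sys[${node##*node}]=$nr_hugepages     # nodes_sys[0]=1024 here
done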
07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # continue 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 07:30:35 -- setup/common.sh@33 -- # echo 0 00:03:09.611 07:30:35 -- setup/common.sh@33 -- # return 0 00:03:09.611 node0=1024 expecting 1024 00:03:09.611 ************************************ 00:03:09.611 END TEST default_setup 00:03:09.611 ************************************ 00:03:09.611 07:30:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.612 07:30:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.612 07:30:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.612 07:30:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.612 07:30:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:09.612 07:30:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:09.612 00:03:09.612 real 0m1.061s 00:03:09.612 user 0m0.470s 00:03:09.612 sys 0m0.474s 00:03:09.612 07:30:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:09.612 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:03:09.612 07:30:35 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:09.612 07:30:35 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.612 07:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.612 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:03:09.612 ************************************ 00:03:09.612 START TEST per_node_1G_alloc 00:03:09.612 ************************************ 00:03:09.612 07:30:35 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:09.612 07:30:35 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:09.612 07:30:35 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:09.612 07:30:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:09.612 07:30:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.612 07:30:35 -- setup/hugepages.sh@51 -- # shift 00:03:09.612 07:30:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.612 07:30:35 -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.612 07:30:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.612 07:30:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:09.612 07:30:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.612 07:30:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.612 07:30:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.612 07:30:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:09.612 07:30:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:09.612 07:30:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.612 07:30:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.612 07:30:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.612 07:30:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.612 07:30:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.612 07:30:35 -- setup/hugepages.sh@73 -- # return 0 00:03:09.612 07:30:35 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:09.612 07:30:35 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:09.612 07:30:35 -- setup/hugepages.sh@146 -- # setup output 00:03:09.612 07:30:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.612 07:30:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:09.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:09.873 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:09.873 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:09.873 07:30:35 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:09.873 07:30:35 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:09.873 07:30:35 -- setup/hugepages.sh@89 -- # local node 00:03:09.873 07:30:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.873 07:30:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.873 07:30:35 -- setup/hugepages.sh@92 -- # local surp 00:03:09.873 07:30:35 -- setup/hugepages.sh@93 -- # local resv 00:03:09.873 07:30:35 -- setup/hugepages.sh@94 -- # local anon 00:03:09.873 07:30:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.873 07:30:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.873 07:30:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.873 07:30:35 -- setup/common.sh@18 -- # local node= 00:03:09.873 07:30:35 -- setup/common.sh@19 -- # local var val 00:03:09.873 07:30:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.873 07:30:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.873 07:30:35 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.873 07:30:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.873 07:30:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.873 07:30:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.873 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.873 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.873 07:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9154032 kB' 'MemAvailable: 10534872 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456864 kB' 'Inactive: 1259140 kB' 'Active(anon): 129032 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120144 kB' 'Mapped: 51036 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154152 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92116 kB' 'KernelStack: 6440 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.134 07:30:35 
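The per_node_1G_alloc setup above requested 1048576 kB on node 0; with the default 2048 kB hugepage size that resolves to the 512 pages now reported in the snapshot (HugePages_Total: 512, Hugetlb: 1048576 kB). A tiny sketch of that arithmetic, assumed to mirror get_test_nr_hugepages rather than quoted from it:

# Sketch only; names are illustrative.
size_kb=1048576                                  # requested per-node allocation
hugepagesize_kb=2048                             # Hugepagesize: 2048 kB from meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"           # matches the setup.sh invocation above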
-- setup/common.sh@31 -- # IFS=': ' 00:03:10.134 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.134 07:30:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 
07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.135 07:30:35 -- setup/common.sh@33 -- # echo 0 00:03:10.135 07:30:35 -- setup/common.sh@33 -- # return 0 00:03:10.135 07:30:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:10.135 07:30:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.135 07:30:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.135 07:30:35 -- setup/common.sh@18 -- # local node= 00:03:10.135 07:30:35 -- setup/common.sh@19 -- # local var val 00:03:10.135 07:30:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.135 07:30:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.135 07:30:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.135 07:30:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.135 07:30:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.135 07:30:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9154032 kB' 'MemAvailable: 10534872 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456708 kB' 'Inactive: 1259140 kB' 'Active(anon): 128876 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 
kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119928 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154148 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92112 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.135 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.135 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # 
continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.136 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.136 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.136 07:30:35 -- setup/common.sh@33 -- # echo 0 00:03:10.136 07:30:35 -- setup/common.sh@33 -- # return 0 00:03:10.136 07:30:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:10.136 07:30:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.136 07:30:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.136 07:30:35 -- setup/common.sh@18 -- # local node= 00:03:10.137 07:30:35 -- setup/common.sh@19 -- # local var val 00:03:10.137 07:30:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.137 07:30:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.137 07:30:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.137 07:30:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.137 07:30:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.137 07:30:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.137 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.137 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.137 07:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9154032 kB' 'MemAvailable: 10534872 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456684 kB' 'Inactive: 1259140 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119940 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154148 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92112 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.137 07:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.137 07:30:35 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue checks for the remaining /proc/meminfo fields elided ...] 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.138 07:30:35 -- setup/common.sh@33 -- # echo 0 00:03:10.138 07:30:35 -- setup/common.sh@33 -- # return 0 00:03:10.138 nr_hugepages=512 00:03:10.138 resv_hugepages=0 00:03:10.138 surplus_hugepages=0 00:03:10.138 07:30:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.138 07:30:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:10.138 07:30:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.138 07:30:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.138 anon_hugepages=0 00:03:10.138 07:30:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.138 07:30:35 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:10.138 07:30:35 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:10.138 07:30:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.138 07:30:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.138 07:30:35 -- setup/common.sh@18 -- # local node= 00:03:10.138 07:30:35 -- setup/common.sh@19 -- # local var val 00:03:10.138 07:30:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.138 07:30:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.138 07:30:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.138 07:30:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.138 07:30:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.138 07:30:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.138 07:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9154032 kB' 'MemAvailable: 10534872 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 1259140 kB' 'Active(anon): 128824 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119964 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154148 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92112 kB' 'KernelStack: 6464 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.138 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.138 
07:30:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.138 07:30:35 -- setup/common.sh@32 -- # continue [... identical per-field checks of the remaining /proc/meminfo keys against HugePages_Total elided; every non-matching field falls through via continue ...] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.139 07:30:35 -- setup/common.sh@33 -- # echo 512 00:03:10.139 07:30:35 -- setup/common.sh@33 -- # return 0 00:03:10.139 07:30:35 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:10.139 07:30:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.139 07:30:35 -- setup/hugepages.sh@27 -- # local node 00:03:10.139 07:30:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.139 07:30:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.139 07:30:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:10.139 07:30:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.139 07:30:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.139 07:30:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.139 07:30:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.139 07:30:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.139 07:30:35 -- setup/common.sh@18 -- # local node=0 00:03:10.139 07:30:35 -- setup/common.sh@19 -- # local 
var val 00:03:10.139 07:30:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.139 07:30:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.139 07:30:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.139 07:30:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.139 07:30:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.139 07:30:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9154032 kB' 'MemUsed: 3085072 kB' 'SwapCached: 0 kB' 'Active: 456608 kB' 'Inactive: 1259140 kB' 'Active(anon): 128776 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1597456 kB' 'Mapped: 50856 kB' 'AnonPages: 119860 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62036 kB' 'Slab: 154148 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.139 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.139 07:30:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.139 07:30:35 -- 
setup/common.sh@32 -- # continue [... identical per-field checks of the remaining node0 meminfo keys against HugePages_Surp elided ...] 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.140 07:30:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # continue 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.140 07:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.140 07:30:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.140 07:30:35 -- setup/common.sh@33 -- # echo 0 00:03:10.140 07:30:35 -- setup/common.sh@33 -- # return 0 00:03:10.140 07:30:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.140 07:30:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.140 07:30:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.140 node0=512 expecting 512 00:03:10.140 07:30:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.140 07:30:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.140 07:30:35 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:10.140 00:03:10.140 real 0m0.581s 00:03:10.140 user 0m0.270s 00:03:10.140 sys 0m0.317s 00:03:10.140 07:30:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:10.140 ************************************ 00:03:10.140 END TEST per_node_1G_alloc 00:03:10.140 ************************************ 00:03:10.140 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:03:10.140 07:30:35 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:10.140 07:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:10.140 07:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:10.140 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:03:10.140 ************************************ 00:03:10.140 START TEST even_2G_alloc 00:03:10.140 ************************************ 00:03:10.140 07:30:35 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:10.140 07:30:35 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:10.140 07:30:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:10.140 07:30:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.140 07:30:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.140 07:30:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:10.140 07:30:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.140 07:30:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.140 07:30:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.140 07:30:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:10.140 07:30:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:10.140 07:30:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.140 07:30:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.140 07:30:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.140 07:30:35 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:10.140 07:30:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.140 07:30:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:10.140 07:30:35 -- setup/hugepages.sh@83 -- # : 0 00:03:10.140 07:30:35 -- setup/hugepages.sh@84 -- # : 0 00:03:10.140 07:30:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.140 07:30:35 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:10.140 07:30:35 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:10.140 07:30:35 -- setup/hugepages.sh@153 -- # setup output 00:03:10.140 07:30:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.140 07:30:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:10.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:10.711 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:10.711 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:10.711 07:30:36 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:10.711 07:30:36 -- setup/hugepages.sh@89 -- # local node 00:03:10.711 07:30:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.711 07:30:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.711 07:30:36 -- setup/hugepages.sh@92 -- # local surp 00:03:10.711 07:30:36 -- setup/hugepages.sh@93 -- # local resv 00:03:10.711 07:30:36 -- setup/hugepages.sh@94 -- # local anon 00:03:10.711 07:30:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.711 07:30:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.711 07:30:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.711 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:10.711 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:10.711 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.711 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.711 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.711 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.711 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.711 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.711 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.711 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8109848 kB' 'MemAvailable: 9490688 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456912 kB' 'Inactive: 1259140 kB' 'Active(anon): 129080 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120184 kB' 'Mapped: 50988 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154156 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92120 kB' 'KernelStack: 6456 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.711 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.711 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.711 07:30:36 -- setup/common.sh@32 -- # continue [... identical per-field checks of the remaining /proc/meminfo keys against AnonHugePages elided ...] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # 
continue 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.712 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:10.712 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:10.712 07:30:36 -- setup/hugepages.sh@97 -- # anon=0 00:03:10.712 07:30:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.712 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.712 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:10.712 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:10.712 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.712 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.712 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.712 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.712 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.712 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8109596 kB' 'MemAvailable: 9490436 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 1259140 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119880 kB' 'Mapped: 50880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154164 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92128 kB' 'KernelStack: 6448 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # continue 
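The same setup/common.sh helper drives every one of the meminfo walks traced above. As a readable reference, here is a minimal bash sketch of what get_meminfo appears to do, reconstructed from the traced commands; the exact body and variable handling of the upstream script may differ, so treat the names and structure below as an approximation:

  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local var val _ line
      local mem_f mem
      mem_f=/proc/meminfo
      # with a node argument, prefer that node's meminfo file
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # per-node files prefix every line with "Node <N> "; strip that prefix
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"   # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
          return 0
      done
      return 1
  }
  # usage as seen in the trace: get_meminfo HugePages_Rsvd, get_meminfo HugePages_Surp 0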
00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.712 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.712 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.713 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.713 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # 
continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.714 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:10.714 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:10.714 07:30:36 -- setup/hugepages.sh@99 -- # surp=0 00:03:10.714 07:30:36 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.714 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.714 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:10.714 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:10.714 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.714 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.714 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.714 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.714 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.714 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8110312 kB' 'MemAvailable: 9491152 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456688 kB' 'Inactive: 1259140 kB' 'Active(anon): 128856 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 50880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154164 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92128 kB' 'KernelStack: 6448 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.714 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.714 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- 
setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 
00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.715 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.715 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.716 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:10.716 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:10.716 07:30:36 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.716 07:30:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.716 nr_hugepages=1024 00:03:10.716 07:30:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.716 resv_hugepages=0 00:03:10.716 surplus_hugepages=0 00:03:10.716 anon_hugepages=0 00:03:10.716 07:30:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.716 07:30:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.716 07:30:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.716 07:30:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.716 07:30:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.716 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.716 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:10.716 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:10.716 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.716 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.716 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.716 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.716 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.716 07:30:36 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8110312 kB' 'MemAvailable: 9491152 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456604 kB' 'Inactive: 1259140 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 50880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154160 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92124 kB' 'KernelStack: 6448 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 
07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.716 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.716 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 
00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 
00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.717 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.717 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.717 07:30:36 -- setup/common.sh@33 -- # echo 1024 00:03:10.718 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:10.718 07:30:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.718 07:30:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.718 07:30:36 -- setup/hugepages.sh@27 -- # local node 00:03:10.718 07:30:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.718 07:30:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.718 07:30:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:10.718 07:30:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.718 07:30:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.718 07:30:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.718 07:30:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.718 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.718 07:30:36 -- setup/common.sh@18 -- # local node=0 00:03:10.718 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:10.718 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.718 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.718 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.718 07:30:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.718 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.718 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8110312 kB' 'MemUsed: 4128792 kB' 'SwapCached: 0 kB' 'Active: 456724 kB' 'Inactive: 1259140 kB' 'Active(anon): 128892 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 1597456 kB' 'Mapped: 50880 kB' 'AnonPages: 120000 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62036 kB' 'Slab: 154156 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.718 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.718 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- 
setup/common.sh@32 -- # continue 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.978 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.978 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # continue 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.979 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.979 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.979 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:10.979 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:10.979 07:30:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.979 node0=1024 expecting 1024 00:03:10.979 ************************************ 00:03:10.979 END TEST even_2G_alloc 00:03:10.979 ************************************ 
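The even_2G_alloc verification traced above repeatedly exercises the get_meminfo helper from setup/common.sh: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), splits each line on ': ', skips every field that is not the requested key, and echoes the matching value; hugepages.sh then checks that HugePages_Total equals nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0 in this run) before printing "node0=1024 expecting 1024". The following is a minimal, self-contained sketch of that lookup pattern and check, written for this log and not the actual SPDK helpers, assuming only a stock /proc/meminfo layout:

    # Sketch of the lookup pattern seen in the trace above (not setup/common.sh itself).
    get_meminfo() {
        local key=$1 node=$2
        local mem_f=/proc/meminfo
        # With a node argument, the per-node sysfs meminfo file is consulted instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node N "; drop it, then split on ': '.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # The consistency check performed by hugepages.sh, with the values from this run:
    total=$(get_meminfo HugePages_Total)    # 1024
    surp=$(get_meminfo HugePages_Surp 0)    # 0 (per-node lookup against node0)
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    (( total == 1024 + surp + resv )) && echo 'node0=1024 expecting 1024'

The odd_alloc test that begins just below applies the same machinery to an odd page count: HUGEMEM=2049 requests 2098176 kB, which at the 2048 kB hugepage size works out to nr_hugepages=1025 (2099200 kB of Hugetlb in the subsequent meminfo dumps).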
00:03:10.979 07:30:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.979 07:30:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.979 07:30:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.979 07:30:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.979 00:03:10.979 real 0m0.626s 00:03:10.979 user 0m0.292s 00:03:10.979 sys 0m0.326s 00:03:10.979 07:30:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:10.979 07:30:36 -- common/autotest_common.sh@10 -- # set +x 00:03:10.979 07:30:36 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:10.979 07:30:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:10.979 07:30:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:10.979 07:30:36 -- common/autotest_common.sh@10 -- # set +x 00:03:10.979 ************************************ 00:03:10.979 START TEST odd_alloc 00:03:10.979 ************************************ 00:03:10.979 07:30:36 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:10.979 07:30:36 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:10.979 07:30:36 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:10.979 07:30:36 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:10.979 07:30:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.979 07:30:36 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.979 07:30:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.979 07:30:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:10.979 07:30:36 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:10.979 07:30:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.979 07:30:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.979 07:30:36 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:10.979 07:30:36 -- setup/hugepages.sh@83 -- # : 0 00:03:10.979 07:30:36 -- setup/hugepages.sh@84 -- # : 0 00:03:10.979 07:30:36 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.979 07:30:36 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:10.979 07:30:36 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:10.979 07:30:36 -- setup/hugepages.sh@160 -- # setup output 00:03:10.979 07:30:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.979 07:30:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:11.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:11.240 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:11.240 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:11.240 07:30:36 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:11.240 07:30:36 -- setup/hugepages.sh@89 -- # local node 00:03:11.240 07:30:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.240 07:30:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.240 07:30:36 -- setup/hugepages.sh@92 -- # local surp 00:03:11.240 07:30:36 -- setup/hugepages.sh@93 -- # local resv 00:03:11.240 07:30:36 -- setup/hugepages.sh@94 -- # local anon 00:03:11.240 07:30:36 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.240 07:30:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.240 07:30:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.240 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:11.240 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:11.240 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.240 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.240 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.240 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.240 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.240 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105248 kB' 'MemAvailable: 9486088 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456696 kB' 'Inactive: 1259140 kB' 'Active(anon): 128864 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119996 kB' 'Mapped: 50992 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154128 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92092 kB' 'KernelStack: 6440 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # 
continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.240 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.240 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.241 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:11.241 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:11.241 07:30:36 -- setup/hugepages.sh@97 -- # anon=0 00:03:11.241 07:30:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.241 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.241 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:11.241 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:11.241 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.241 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.241 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.241 07:30:36 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.241 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.241 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105664 kB' 'MemAvailable: 9486504 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456760 kB' 'Inactive: 1259140 kB' 'Active(anon): 128928 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154124 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92088 kB' 'KernelStack: 6464 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.241 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.241 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 
07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.242 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.242 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.505 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.505 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:11.505 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:11.505 07:30:36 -- setup/hugepages.sh@99 -- # surp=0 00:03:11.505 07:30:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.505 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.505 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:11.505 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:11.505 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.505 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.505 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.505 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.505 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.505 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.505 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105416 kB' 'MemAvailable: 9486256 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456684 kB' 'Inactive: 1259140 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154116 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92080 kB' 'KernelStack: 6464 kB' 
'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.506 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.506 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.507 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:11.507 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:11.507 nr_hugepages=1025 00:03:11.507 resv_hugepages=0 00:03:11.507 surplus_hugepages=0 00:03:11.507 07:30:36 -- setup/hugepages.sh@100 -- # resv=0 00:03:11.507 07:30:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:11.507 07:30:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.507 07:30:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.507 anon_hugepages=0 00:03:11.507 07:30:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.507 07:30:36 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.507 07:30:36 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:11.507 07:30:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.507 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.507 07:30:36 -- setup/common.sh@18 -- # local node= 00:03:11.507 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:11.507 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.507 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.507 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.507 07:30:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.507 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.507 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105416 kB' 'MemAvailable: 9486256 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456624 kB' 'Inactive: 1259140 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154116 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92080 kB' 'KernelStack: 6464 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 
00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 
00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.507 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.507 07:30:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.508 07:30:36 -- setup/common.sh@33 -- # echo 1025 00:03:11.508 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:11.508 07:30:36 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.508 07:30:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.508 07:30:36 -- setup/hugepages.sh@27 -- # local node 00:03:11.508 07:30:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.508 07:30:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
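The xtrace above is the setup/common.sh get_meminfo helper walking every /proc/meminfo key until it reaches HugePages_Total, echoing 1025, which hugepages.sh then checks against nr_hugepages + surp + resv before moving on to the per-node counts. A minimal sketch of that lookup, using a hypothetical name (get_meminfo_sketch) rather than the verbatim SPDK helper:

    # Sketch of the lookup the trace above performs: read /proc/meminfo (or a
    # node's meminfo), strip the "Node N " prefix, and echo the requested key.
    get_meminfo_sketch() {                      # hypothetical name, not the SPDK function
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do    # same IFS=': ' / read -r var val _ as the trace
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # get_meminfo_sketch HugePages_Total     -> 1025 in this run
    # get_meminfo_sketch HugePages_Surp 0    -> 0 for node0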
00:03:11.508 07:30:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:11.508 07:30:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.508 07:30:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.508 07:30:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.508 07:30:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.508 07:30:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.508 07:30:36 -- setup/common.sh@18 -- # local node=0 00:03:11.508 07:30:36 -- setup/common.sh@19 -- # local var val 00:03:11.508 07:30:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.508 07:30:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.508 07:30:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.508 07:30:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.508 07:30:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.508 07:30:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.508 07:30:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8105436 kB' 'MemUsed: 4133668 kB' 'SwapCached: 0 kB' 'Active: 456684 kB' 'Inactive: 1259140 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1597456 kB' 'Mapped: 50856 kB' 'AnonPages: 120016 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62036 kB' 'Slab: 154116 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.508 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.508 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 
07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 
07:30:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # continue 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.509 07:30:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.509 07:30:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.509 07:30:36 -- setup/common.sh@33 -- # echo 0 00:03:11.509 07:30:36 -- setup/common.sh@33 -- # return 0 00:03:11.509 07:30:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.510 07:30:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.510 07:30:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.510 07:30:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.510 node0=1025 expecting 1025 00:03:11.510 07:30:36 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:11.510 07:30:36 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:11.510 ************************************ 00:03:11.510 END TEST odd_alloc 00:03:11.510 ************************************ 00:03:11.510 00:03:11.510 real 0m0.604s 00:03:11.510 user 0m0.290s 00:03:11.510 sys 0m0.314s 00:03:11.510 07:30:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:11.510 07:30:36 -- common/autotest_common.sh@10 -- # set +x 00:03:11.510 07:30:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:11.510 07:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:11.510 07:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:11.510 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:03:11.510 ************************************ 00:03:11.510 START TEST custom_alloc 00:03:11.510 ************************************ 00:03:11.510 07:30:37 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:11.510 07:30:37 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:11.510 07:30:37 -- setup/hugepages.sh@169 -- # local node 00:03:11.510 07:30:37 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:11.510 07:30:37 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:11.510 07:30:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:11.510 07:30:37 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:03:11.510 07:30:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.510 07:30:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:11.510 07:30:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.510 07:30:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.510 07:30:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.510 07:30:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.510 07:30:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:11.510 07:30:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.510 07:30:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.510 07:30:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.510 07:30:37 -- setup/hugepages.sh@83 -- # : 0 00:03:11.510 07:30:37 -- setup/hugepages.sh@84 -- # : 0 00:03:11.510 07:30:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:11.510 07:30:37 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:11.510 07:30:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:11.510 07:30:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:11.510 07:30:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.510 07:30:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.510 07:30:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.510 07:30:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:11.510 07:30:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.510 07:30:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.510 07:30:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:11.510 07:30:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.510 07:30:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:11.510 07:30:37 -- setup/hugepages.sh@78 -- # return 0 00:03:11.510 07:30:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:11.510 07:30:37 -- setup/hugepages.sh@187 -- # setup output 00:03:11.510 07:30:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.510 07:30:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:11.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:12.032 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.032 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.032 07:30:37 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:12.032 07:30:37 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:12.032 07:30:37 -- setup/hugepages.sh@89 -- # local node 00:03:12.032 07:30:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.032 07:30:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.032 07:30:37 -- setup/hugepages.sh@92 -- # local surp 
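Here get_test_nr_hugepages converts the requested 1048576 kB pool into 512 hugepages (2048 kB apiece in this run) and, with a single NUMA node, assigns all of them to node 0 before handing HUGENODE='nodes_hp[0]=512' to scripts/setup.sh. A rough sketch of that arithmetic, with illustrative variable names rather than the exact SPDK implementation:

    # Sketch of the page-count math behind get_test_nr_hugepages above
    default_hugepages_kb=2048                          # Hugepagesize reported in the run
    size_kb=1048576                                    # requested pool, 1 GiB
    nr_hugepages=$((size_kb / default_hugepages_kb))   # -> 512
    declare -A nodes_hp=([0]=$nr_hugepages)            # one node, so node0 takes all 512
    HUGENODE=''
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+="nodes_hp[$node]=${nodes_hp[node]},"
    done
    echo "${HUGENODE%,}"                               # -> nodes_hp[0]=512, as in the trace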
00:03:12.032 07:30:37 -- setup/hugepages.sh@93 -- # local resv 00:03:12.032 07:30:37 -- setup/hugepages.sh@94 -- # local anon 00:03:12.032 07:30:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.033 07:30:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.033 07:30:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.033 07:30:37 -- setup/common.sh@18 -- # local node= 00:03:12.033 07:30:37 -- setup/common.sh@19 -- # local var val 00:03:12.033 07:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.033 07:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.033 07:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.033 07:30:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.033 07:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.033 07:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9158292 kB' 'MemAvailable: 10539132 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 457332 kB' 'Inactive: 1259140 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120616 kB' 'Mapped: 50952 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154108 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92072 kB' 'KernelStack: 6504 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 
00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.033 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.033 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.034 07:30:37 -- setup/common.sh@33 -- # echo 0 00:03:12.034 07:30:37 -- setup/common.sh@33 -- # return 0 00:03:12.034 07:30:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:12.034 07:30:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.034 07:30:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.034 07:30:37 -- setup/common.sh@18 -- # local node= 00:03:12.034 07:30:37 -- setup/common.sh@19 -- # local var val 00:03:12.034 07:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.034 07:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
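The loop that follows repeats the same key scan for HugePages_Surp, and later HugePages_Rsvd, so verify_nr_hugepages can compare the pool it finds against what the test requested, just as it did with 1025 pages for odd_alloc above. A hedged sketch of that final check, reusing the hypothetical get_meminfo_sketch helper from earlier:

    # Sketch of the check verify_nr_hugepages builds from these lookups
    anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in this run
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
    total=$(get_meminfo_sketch HugePages_Total)  # custom_alloc expects 512 here
    nr_hugepages=512
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool matches: $total"
    else
        echo "unexpected pool size: $total" >&2
    fi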
00:03:12.034 07:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.034 07:30:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.034 07:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.034 07:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9158040 kB' 'MemAvailable: 10538880 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456540 kB' 'Inactive: 1259140 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120116 kB' 'Mapped: 50908 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154104 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92068 kB' 'KernelStack: 6456 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- 
setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.034 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.034 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 
00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.035 07:30:37 -- setup/common.sh@33 -- # echo 0 00:03:12.035 07:30:37 -- setup/common.sh@33 -- # return 0 00:03:12.035 07:30:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:12.035 07:30:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.035 07:30:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.035 07:30:37 -- setup/common.sh@18 -- # local node= 00:03:12.035 07:30:37 -- setup/common.sh@19 -- # local var val 00:03:12.035 07:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.035 07:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.035 07:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.035 07:30:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.035 07:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.035 07:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9158040 kB' 'MemAvailable: 10538880 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456676 kB' 'Inactive: 1259140 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 
50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154112 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92076 kB' 'KernelStack: 6464 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.035 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.035 07:30:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 
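For reference, the loop traced above is the get_meminfo helper scanning /proc/meminfo: each record is split with IFS=': ' into a key and a value, every key other than the requested one falls through to `continue`, and the matching key's value is echoed back to the caller (here the caller asked for HugePages_Rsvd, which resolves to 0 a few records further down). A minimal self-contained sketch of that pattern, using an illustrative function name rather than the exact setup/common.sh code:

    #!/usr/bin/env bash
    # meminfo_value KEY -> print KEY's value from /proc/meminfo (the "kB"
    # suffix lands in the throwaway field), e.g. `meminfo_value HugePages_Rsvd` -> 0.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # key not present
    }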
00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.036 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.036 07:30:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 
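The earlier pass of this scan already established surp=0 (HugePages_Surp); the records that follow finish the HugePages_Rsvd pass with resv=0 and then run the accounting check from setup/hugepages.sh@107/@110: the pool the test configured must be fully accounted for, first against the requested size and then against the HugePages_Total value read back from /proc/meminfo, i.e. 512 == 512 + 0 + 0 in this run. Restated as plain shell arithmetic with illustrative variable names:

    # Values taken from this run's trace: 512 pages requested, none surplus,
    # none reserved, 512 reported back by the kernel.
    nr_hugepages=512 surp=0 resv=0 hp_total=512
    (( hp_total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2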
00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.037 07:30:37 -- setup/common.sh@33 -- # echo 0 00:03:12.037 07:30:37 -- setup/common.sh@33 -- # return 0 00:03:12.037 nr_hugepages=512 00:03:12.037 resv_hugepages=0 00:03:12.037 surplus_hugepages=0 00:03:12.037 anon_hugepages=0 00:03:12.037 07:30:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:12.037 07:30:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:12.037 07:30:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.037 07:30:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.037 07:30:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.037 07:30:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:12.037 07:30:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:12.037 07:30:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.037 07:30:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.037 07:30:37 -- setup/common.sh@18 -- # local node= 00:03:12.037 07:30:37 -- setup/common.sh@19 -- # local var val 00:03:12.037 07:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.037 07:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.037 07:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.037 07:30:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.037 07:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.037 07:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9158040 kB' 'MemAvailable: 10538880 kB' 'Buffers: 2684 kB' 'Cached: 1594772 kB' 'SwapCached: 0 kB' 'Active: 456372 kB' 'Inactive: 1259140 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 50856 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154108 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92072 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.037 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.037 07:30:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 
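The remainder of this pass (below) is the per-node half of the same check: get_nodes globs /sys/devices/system/node/node+([0-9]) and finds a single node (no_nodes=1), so all 512 pages are expected on node0; when get_meminfo is called with a node argument, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix is stripped from each record before the same key/value scan runs. The test then prints "node0=512 expecting 512", closes custom_alloc, and starts no_shrink_alloc, which reruns the same machinery with 2097152 kB / 2048 kB = 1024 pages pinned to node 0. A rough per-node counterpart to the sketch above (hypothetical helper, simplified prefix strip):

    # node_meminfo_value KEY NODE -> print KEY's value from the node's sysfs
    # meminfo; the traced script strips the prefix with "${mem[@]#Node +([0-9]) }".
    node_meminfo_value() {
        local get=$1 node=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}          # drop the "Node 0 " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }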
00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.038 07:30:37 -- setup/common.sh@33 -- # echo 512 00:03:12.038 07:30:37 -- setup/common.sh@33 -- # return 0 00:03:12.038 07:30:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:12.038 07:30:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.038 07:30:37 -- setup/hugepages.sh@27 -- # local node 00:03:12.038 07:30:37 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:03:12.038 07:30:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.038 07:30:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:12.038 07:30:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.038 07:30:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.038 07:30:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.038 07:30:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.038 07:30:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.038 07:30:37 -- setup/common.sh@18 -- # local node=0 00:03:12.038 07:30:37 -- setup/common.sh@19 -- # local var val 00:03:12.038 07:30:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.038 07:30:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.038 07:30:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.038 07:30:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.038 07:30:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.038 07:30:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9158040 kB' 'MemUsed: 3081064 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 1259140 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1597456 kB' 'Mapped: 50856 kB' 'AnonPages: 119888 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62036 kB' 'Slab: 154108 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.038 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.038 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 
07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.039 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.039 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # continue 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.299 07:30:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.299 07:30:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.299 07:30:37 -- setup/common.sh@33 -- # echo 0 00:03:12.299 07:30:37 -- setup/common.sh@33 -- # return 0 00:03:12.299 07:30:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.299 07:30:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.299 07:30:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.299 node0=512 expecting 512 00:03:12.299 ************************************ 00:03:12.299 END TEST custom_alloc 00:03:12.299 ************************************ 00:03:12.299 07:30:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.299 07:30:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:12.299 07:30:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:12.299 00:03:12.299 real 0m0.609s 00:03:12.299 user 0m0.298s 00:03:12.299 sys 0m0.310s 00:03:12.299 07:30:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:12.299 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:03:12.299 07:30:37 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:12.299 07:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.299 07:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.299 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:03:12.299 ************************************ 00:03:12.299 START TEST no_shrink_alloc 00:03:12.299 ************************************ 00:03:12.299 07:30:37 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:12.299 07:30:37 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:12.299 07:30:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.299 07:30:37 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.299 07:30:37 -- 
setup/hugepages.sh@51 -- # shift 00:03:12.299 07:30:37 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.299 07:30:37 -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.299 07:30:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.299 07:30:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.299 07:30:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.299 07:30:37 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.299 07:30:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.299 07:30:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.299 07:30:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:12.299 07:30:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.299 07:30:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.299 07:30:37 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:12.299 07:30:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.299 07:30:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.299 07:30:37 -- setup/hugepages.sh@73 -- # return 0 00:03:12.299 07:30:37 -- setup/hugepages.sh@198 -- # setup output 00:03:12.299 07:30:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.299 07:30:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:12.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:12.560 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.560 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.560 07:30:38 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:12.560 07:30:38 -- setup/hugepages.sh@89 -- # local node 00:03:12.560 07:30:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.560 07:30:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.560 07:30:38 -- setup/hugepages.sh@92 -- # local surp 00:03:12.560 07:30:38 -- setup/hugepages.sh@93 -- # local resv 00:03:12.560 07:30:38 -- setup/hugepages.sh@94 -- # local anon 00:03:12.560 07:30:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.560 07:30:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.560 07:30:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.560 07:30:38 -- setup/common.sh@18 -- # local node= 00:03:12.560 07:30:38 -- setup/common.sh@19 -- # local var val 00:03:12.560 07:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.560 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.560 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.560 07:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.560 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.560 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8110464 kB' 'MemAvailable: 9491308 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 457088 kB' 'Inactive: 1259144 kB' 'Active(anon): 129256 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120352 kB' 'Mapped: 50972 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 
154132 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92096 kB' 'KernelStack: 6456 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 
-- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.560 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.560 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.561 07:30:38 -- setup/common.sh@32 -- # continue 00:03:12.561 07:30:38 -- setup/common.sh@31 -- # 
IFS=': '
00:03:12.561 07:30:38 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace repeats the same IFS=': ' / read -r var val _ / continue triple for every remaining /proc/meminfo key (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) until the requested key is reached ...]
00:03:12.561 07:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.561 07:30:38 -- setup/common.sh@33 -- # echo 0
00:03:12.561 07:30:38 -- setup/common.sh@33 -- # return 0
00:03:12.561 07:30:38 -- setup/hugepages.sh@97 -- # anon=0
00:03:12.561 07:30:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.561 07:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.561 07:30:38 -- setup/common.sh@18 -- # local node=
00:03:12.561 07:30:38 -- setup/common.sh@19 -- # local var val
00:03:12.561 07:30:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.561 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.561 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.561 07:30:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.561 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.561 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.561 07:30:38 -- setup/common.sh@31 -- # IFS=': '
00:03:12.561 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8110720 kB' 'MemAvailable: 9491564 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 456716 kB' 'Inactive: 1259144 kB' 'Active(anon): 128884 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119972 kB' 'Mapped: 50860 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154148 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92112 kB' 'KernelStack: 6464 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace then walks the key list above again, one IFS=': ' / read -r var val _ / continue triple per entry, looking for HugePages_Surp ...]
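The loop traced above is the helper's whole job: feed /proc/meminfo through IFS=': ' read -r var val _ and stop at the requested key. A minimal stand-alone sketch of that parsing pattern follows; the function name and the not-found return value are illustrative, not the exact setup/common.sh helper (which also preloads the file with mapfile and handles per-node files):

get_meminfo_value() {                       # e.g. get_meminfo_value HugePages_Surp
    local key=$1 var val _
    while IFS=': ' read -r var val _; do    # "HugePages_Surp:   0" -> var=HugePages_Surp, val=0
        if [[ $var == "$key" ]]; then
            echo "$val"                     # numeric part only; a trailing "kB" lands in $_
            return 0
        fi
    done < /proc/meminfo
    return 1                                # key not present (illustrative choice)
}

Against the snapshot printed above, get_meminfo_value HugePages_Surp would print 0 and get_meminfo_value HugePages_Total would print 1024, matching the values the trace extracts below.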
00:03:12.824 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.824 07:30:38 -- setup/common.sh@32 -- # continue
00:03:12.824 07:30:38 -- setup/common.sh@31 -- # IFS=': '
00:03:12.824 07:30:38 -- setup/common.sh@31 -- # read -r var val _
00:03:12.824 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.824 07:30:38 -- setup/common.sh@33 -- # echo 0
00:03:12.824 07:30:38 -- setup/common.sh@33 -- # return 0
00:03:12.824 07:30:38 -- setup/hugepages.sh@99 -- # surp=0
00:03:12.824 07:30:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... setup/common.sh@17-31 xtrace: local get=HugePages_Rsvd, node empty, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix stripped ...]
00:03:12.825 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8110720 kB' 'MemAvailable: 9491564 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 456740 kB' 'Inactive: 1259144 kB' 'Active(anon): 128908 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 50860 kB' 'Shmem: 10484 kB' 'KReclaimable: 62036 kB' 'Slab: 154136 kB' 'SReclaimable: 62036 kB' 'SUnreclaim: 92100 kB' 'KernelStack: 6448 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace scans the keys above one by one (IFS=': ', read -r var val _, continue) until HugePages_Rsvd matches ...]
00:03:12.826 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.826 07:30:38 -- setup/common.sh@33 -- # echo 0
00:03:12.826 07:30:38 -- setup/common.sh@33 -- # return 0
00:03:12.826 07:30:38 -- setup/hugepages.sh@100 -- # resv=0
00:03:12.826 07:30:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:12.826 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
07:30:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:12.826 07:30:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:12.826 07:30:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:12.826 07:30:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.826 07:30:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:12.826 07:30:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... setup/common.sh@17-31 xtrace: local get=HugePages_Total, node empty, mem_f=/proc/meminfo, mapfile -t mem ...]
00:03:12.826 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8111296 kB' 'MemAvailable: 9492136 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 453748 kB' 'Inactive: 1259144 kB' 'Active(anon): 125916 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116968 kB' 'Mapped: 50040 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 154052 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 92024 kB' 'KernelStack: 6336 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace scans the keys above until HugePages_Total matches ...]
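The (( 1024 == nr_hugepages + surp + resv )) test traced above is the consistency check verify_nr_hugepages is built around: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages. A minimal sketch of that arithmetic with the values extracted in this run, reusing the hypothetical get_meminfo_value helper sketched earlier (not the script's exact code):

nr_hugepages=1024                              # the count requested via vm.nr_hugepages
surp=$(get_meminfo_value HugePages_Surp)       # 0 in this run
resv=$(get_meminfo_value HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo_value HugePages_Total)     # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
    echo "unexpected hugepage count: kernel reports $total" >&2
fi

Here that reduces to 1024 == 1024 + 0 + 0, so the check passes and the trace moves on to the per-node pass.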
00:03:12.827 07:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[... setup/common.sh@31-32 xtrace continues through FilePmdMapped, CmaTotal, CmaFree and Unaccepted before the match ...]
00:03:12.827 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.827 07:30:38 -- setup/common.sh@33 -- # echo 1024
00:03:12.827 07:30:38 -- setup/common.sh@33 -- # return 0
00:03:12.827 07:30:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.827 07:30:38 -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.827 07:30:38 -- setup/hugepages.sh@27 -- # local node
00:03:12.827 07:30:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.827 07:30:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.827 07:30:38 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:12.827 07:30:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.827 07:30:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.827 07:30:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.827 07:30:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.827 07:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.827 07:30:38 -- setup/common.sh@18 -- # local node=0
00:03:12.827 07:30:38 -- setup/common.sh@19 -- # local var val
00:03:12.827 07:30:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.827 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.827 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.827 07:30:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.827 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.827 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.827 07:30:38 -- setup/common.sh@31 -- # IFS=': '
00:03:12.827 07:30:38 -- setup/common.sh@31 -- # read -r var val _
00:03:12.827 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8111296 kB' 'MemUsed: 4127808 kB' 'SwapCached: 0 kB' 'Active: 453848 kB' 'Inactive: 1259144 kB' 'Active(anon): 126016 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1597460 kB' 'Mapped: 50040 kB' 'AnonPages: 117104 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 154052 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 92024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace scans the node0 keys above (IFS=': ', read -r var val _, continue) until HugePages_Surp matches ...]
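For this per-node pass the trace switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before the same key scan runs. A rough stand-alone sketch of that branch, with illustrative names; the real helper preloads the file with mapfile and strips the prefix with an extglob pattern, but the effect is the same:

node_meminfo_value() {                          # e.g. node_meminfo_value 0 HugePages_Surp
    local node=$1 key=$2 line var val _
    local f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $f ]] || f=/proc/meminfo              # fall back to the system-wide file
    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node lines look like "Node 0 MemTotal: ..."
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$f"
    return 1
}

On this VM it would print 0 for HugePages_Surp on node0, which is what feeds the "node0=1024 expecting 1024" result traced just below.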
00:03:12.828 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.828 07:30:38 -- setup/common.sh@33 -- # echo 0 00:03:12.828 07:30:38 -- setup/common.sh@33 -- # return 0 00:03:12.828 07:30:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.828 07:30:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.828 07:30:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.828 node0=1024 expecting 1024 00:03:12.828 07:30:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.828 07:30:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:12.828 07:30:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:12.828 07:30:38 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:12.828 07:30:38 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:12.828 07:30:38 -- setup/hugepages.sh@202 -- # setup output 00:03:12.828 07:30:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.828 07:30:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:13.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:13.088 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:13.088 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:13.088 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:13.088 07:30:38 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:13.088 07:30:38 -- setup/hugepages.sh@89 -- # local node 00:03:13.088 07:30:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.088 07:30:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.088 07:30:38 -- setup/hugepages.sh@92 -- # local surp 00:03:13.088 07:30:38 -- setup/hugepages.sh@93 -- # local resv 00:03:13.088 07:30:38 -- setup/hugepages.sh@94 -- # local anon 00:03:13.088 07:30:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.088 07:30:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.088 07:30:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.088 07:30:38 -- setup/common.sh@18 -- # local node= 00:03:13.088 07:30:38 -- setup/common.sh@19 -- # local var val 00:03:13.088 07:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.088 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.088 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.088 07:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.088 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.088 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.088 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.088 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.088 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8112148 kB' 'MemAvailable: 9492988 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 454204 kB' 'Inactive: 1259144 kB' 'Active(anon): 126372 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117472 kB' 'Mapped: 50076 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 153912 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 91884 kB' 'KernelStack: 6456 kB' 'PageTables: 4060 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- 
# continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.352 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.352 07:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 
07:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.353 07:30:38 -- setup/common.sh@33 -- # echo 0 00:03:13.353 07:30:38 -- setup/common.sh@33 -- # return 0 00:03:13.353 07:30:38 -- setup/hugepages.sh@97 -- # anon=0 00:03:13.353 07:30:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.353 07:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.353 07:30:38 -- setup/common.sh@18 -- # local node= 00:03:13.353 07:30:38 -- setup/common.sh@19 -- # local var val 00:03:13.353 07:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.353 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.353 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.353 07:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.353 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.353 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8112148 kB' 'MemAvailable: 9492988 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 453820 kB' 'Inactive: 1259144 kB' 'Active(anon): 125988 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117076 kB' 'Mapped: 49924 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 153920 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 91892 kB' 'KernelStack: 6352 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 
07:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.353 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.353 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 
07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 
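(Editor's note: the repeated lookups in this part of the trace feed a small accounting check in setup/hugepages.sh. A rough self-contained sketch of that check, using the values visible in this run (1024 pages, no surplus or reserved); the function name verify_hugepages_sketch is illustrative, not the script's own.)

verify_hugepages_sketch() {
    local nr_hugepages=1024    # the pool size this run expects (echoed as nr_hugepages=1024 below)
    local surp resv total
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
    # The check the trace performs below: the allocated pool must equal the
    # requested count plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || return 1
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
}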
00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # 
continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.354 07:30:38 -- setup/common.sh@33 -- # echo 0 00:03:13.354 07:30:38 -- setup/common.sh@33 -- # return 0 00:03:13.354 07:30:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:13.354 07:30:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.354 07:30:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.354 07:30:38 -- setup/common.sh@18 -- # local node= 00:03:13.354 07:30:38 -- setup/common.sh@19 -- # local var val 00:03:13.354 07:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.354 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.354 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.354 07:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.354 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.354 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8112148 kB' 'MemAvailable: 9492988 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 453820 kB' 'Inactive: 1259144 kB' 'Active(anon): 125988 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117088 kB' 'Mapped: 50040 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 153916 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 91888 kB' 'KernelStack: 6352 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 305092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.354 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.354 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # 
continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 
-- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- 
setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.355 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.355 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.356 07:30:38 -- setup/common.sh@33 -- # echo 0 00:03:13.356 07:30:38 -- setup/common.sh@33 -- # return 0 00:03:13.356 nr_hugepages=1024 00:03:13.356 07:30:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:13.356 07:30:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.356 resv_hugepages=0 00:03:13.356 07:30:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.356 surplus_hugepages=0 00:03:13.356 anon_hugepages=0 00:03:13.356 07:30:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.356 07:30:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.356 07:30:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.356 07:30:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.356 07:30:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.356 07:30:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.356 07:30:38 -- setup/common.sh@18 -- # local node= 00:03:13.356 07:30:38 -- 
setup/common.sh@19 -- # local var val 00:03:13.356 07:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.356 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.356 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.356 07:30:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.356 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.356 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8112148 kB' 'MemAvailable: 9492988 kB' 'Buffers: 2684 kB' 'Cached: 1594776 kB' 'SwapCached: 0 kB' 'Active: 453908 kB' 'Inactive: 1259144 kB' 'Active(anon): 126076 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117228 kB' 'Mapped: 50296 kB' 'Shmem: 10484 kB' 'KReclaimable: 62028 kB' 'Slab: 153916 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 91888 kB' 'KernelStack: 6352 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 304724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 
07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 
07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.356 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.356 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- 
setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.357 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.357 07:30:38 -- setup/common.sh@33 -- # echo 1024 00:03:13.357 07:30:38 -- setup/common.sh@33 -- # return 0 00:03:13.357 07:30:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.357 07:30:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.357 07:30:38 -- setup/hugepages.sh@27 -- # local node 00:03:13.357 07:30:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.357 07:30:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.357 07:30:38 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:13.357 07:30:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.357 07:30:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.357 07:30:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.357 07:30:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.357 07:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.357 07:30:38 -- setup/common.sh@18 -- # local node=0 00:03:13.357 07:30:38 -- setup/common.sh@19 -- # local var val 00:03:13.357 07:30:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.357 07:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.357 07:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.357 07:30:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.357 07:30:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.357 07:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.357 07:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8112148 kB' 'MemUsed: 4126956 kB' 'SwapCached: 0 kB' 'Active: 453784 kB' 'Inactive: 1259144 kB' 'Active(anon): 125952 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1259144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1597460 kB' 'Mapped: 50036 kB' 'AnonPages: 117112 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62028 kB' 'Slab: 153900 kB' 'SReclaimable: 62028 kB' 'SUnreclaim: 91872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.357 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # continue 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.358 07:30:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.358 07:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.358 
07:30:38 -- setup/common.sh@33 -- # echo 0 00:03:13.358 07:30:38 -- setup/common.sh@33 -- # return 0 00:03:13.358 07:30:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.358 07:30:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.358 07:30:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.358 07:30:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.358 07:30:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:13.358 node0=1024 expecting 1024 00:03:13.358 07:30:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:13.358 00:03:13.358 real 0m1.185s 00:03:13.358 user 0m0.604s 00:03:13.358 sys 0m0.592s 00:03:13.358 07:30:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:13.358 07:30:38 -- common/autotest_common.sh@10 -- # set +x 00:03:13.358 ************************************ 00:03:13.358 END TEST no_shrink_alloc 00:03:13.358 ************************************ 00:03:13.358 07:30:38 -- setup/hugepages.sh@217 -- # clear_hp 00:03:13.358 07:30:38 -- setup/hugepages.sh@37 -- # local node hp 00:03:13.358 07:30:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.359 07:30:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.359 07:30:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:13.359 07:30:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.359 07:30:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:13.359 07:30:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:13.359 07:30:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:13.359 00:03:13.359 real 0m5.234s 00:03:13.359 user 0m2.462s 00:03:13.359 sys 0m2.629s 00:03:13.359 07:30:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:13.359 07:30:38 -- common/autotest_common.sh@10 -- # set +x 00:03:13.359 ************************************ 00:03:13.359 END TEST hugepages 00:03:13.359 ************************************ 00:03:13.618 07:30:38 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:13.618 07:30:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.618 07:30:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.618 07:30:38 -- common/autotest_common.sh@10 -- # set +x 00:03:13.618 ************************************ 00:03:13.618 START TEST driver 00:03:13.618 ************************************ 00:03:13.618 07:30:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:13.618 * Looking for test storage... 
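
The no_shrink_alloc/hugepages trace above is the test's get_meminfo helper walking /proc/meminfo (or the per-node copy under /sys/devices/system/node) field by field until it reaches the key it asked for; every non-matching field shows up as one "continue" entry. A minimal sketch of that lookup, using a made-up helper name (meminfo_field) rather than the script's own function:

  # Print one field from /proc/meminfo, or from a single NUMA node's meminfo.
  # Usage: meminfo_field HugePages_Total [node]   (hypothetical helper)
  meminfo_field() {
    local want=$1 node=${2-}
    if [[ -n $node ]]; then
      # Per-node files prefix every line with "Node <n> "; drop that prefix first.
      sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo" |
        awk -v f="$want:" '$1 == f { print $2 }'
    else
      awk -v f="$want:" '$1 == f { print $2 }' /proc/meminfo
    fi
  }

  meminfo_field HugePages_Total       # 1024 on this runner
  meminfo_field HugePages_Surp 0      # 0 for node 0, hence "node0=1024 expecting 1024"
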
00:03:13.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:13.618 07:30:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:13.618 07:30:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:13.618 07:30:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:13.618 07:30:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:13.618 07:30:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:13.618 07:30:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:13.618 07:30:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:13.618 07:30:39 -- scripts/common.sh@335 -- # IFS=.-: 00:03:13.618 07:30:39 -- scripts/common.sh@335 -- # read -ra ver1 00:03:13.618 07:30:39 -- scripts/common.sh@336 -- # IFS=.-: 00:03:13.618 07:30:39 -- scripts/common.sh@336 -- # read -ra ver2 00:03:13.618 07:30:39 -- scripts/common.sh@337 -- # local 'op=<' 00:03:13.618 07:30:39 -- scripts/common.sh@339 -- # ver1_l=2 00:03:13.618 07:30:39 -- scripts/common.sh@340 -- # ver2_l=1 00:03:13.618 07:30:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:13.618 07:30:39 -- scripts/common.sh@343 -- # case "$op" in 00:03:13.618 07:30:39 -- scripts/common.sh@344 -- # : 1 00:03:13.618 07:30:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:13.618 07:30:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:13.618 07:30:39 -- scripts/common.sh@364 -- # decimal 1 00:03:13.618 07:30:39 -- scripts/common.sh@352 -- # local d=1 00:03:13.618 07:30:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:13.619 07:30:39 -- scripts/common.sh@354 -- # echo 1 00:03:13.619 07:30:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:13.619 07:30:39 -- scripts/common.sh@365 -- # decimal 2 00:03:13.619 07:30:39 -- scripts/common.sh@352 -- # local d=2 00:03:13.619 07:30:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:13.619 07:30:39 -- scripts/common.sh@354 -- # echo 2 00:03:13.619 07:30:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:13.619 07:30:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:13.619 07:30:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:13.619 07:30:39 -- scripts/common.sh@367 -- # return 0 00:03:13.619 07:30:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:13.619 07:30:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:13.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.619 --rc genhtml_branch_coverage=1 00:03:13.619 --rc genhtml_function_coverage=1 00:03:13.619 --rc genhtml_legend=1 00:03:13.619 --rc geninfo_all_blocks=1 00:03:13.619 --rc geninfo_unexecuted_blocks=1 00:03:13.619 00:03:13.619 ' 00:03:13.619 07:30:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:13.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.619 --rc genhtml_branch_coverage=1 00:03:13.619 --rc genhtml_function_coverage=1 00:03:13.619 --rc genhtml_legend=1 00:03:13.619 --rc geninfo_all_blocks=1 00:03:13.619 --rc geninfo_unexecuted_blocks=1 00:03:13.619 00:03:13.619 ' 00:03:13.619 07:30:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:13.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.619 --rc genhtml_branch_coverage=1 00:03:13.619 --rc genhtml_function_coverage=1 00:03:13.619 --rc genhtml_legend=1 00:03:13.619 --rc geninfo_all_blocks=1 00:03:13.619 --rc geninfo_unexecuted_blocks=1 00:03:13.619 00:03:13.619 ' 00:03:13.619 07:30:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:13.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.619 --rc genhtml_branch_coverage=1 00:03:13.619 --rc genhtml_function_coverage=1 00:03:13.619 --rc genhtml_legend=1 00:03:13.619 --rc geninfo_all_blocks=1 00:03:13.619 --rc geninfo_unexecuted_blocks=1 00:03:13.619 00:03:13.619 ' 00:03:13.619 07:30:39 -- setup/driver.sh@68 -- # setup reset 00:03:13.619 07:30:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.619 07:30:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:14.188 07:30:39 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:14.188 07:30:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.188 07:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.188 07:30:39 -- common/autotest_common.sh@10 -- # set +x 00:03:14.188 ************************************ 00:03:14.188 START TEST guess_driver 00:03:14.188 ************************************ 00:03:14.188 07:30:39 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:14.188 07:30:39 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:14.188 07:30:39 -- setup/driver.sh@47 -- # local fail=0 00:03:14.188 07:30:39 -- setup/driver.sh@49 -- # pick_driver 00:03:14.188 07:30:39 -- setup/driver.sh@36 -- # vfio 00:03:14.188 07:30:39 -- setup/driver.sh@21 -- # local iommu_grups 00:03:14.188 07:30:39 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:14.188 07:30:39 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:14.188 07:30:39 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:14.188 07:30:39 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:14.188 07:30:39 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:14.188 07:30:39 -- setup/driver.sh@32 -- # return 1 00:03:14.188 07:30:39 -- setup/driver.sh@38 -- # uio 00:03:14.188 07:30:39 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:14.188 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:14.188 07:30:39 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:14.188 Looking for driver=uio_pci_generic 00:03:14.188 07:30:39 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:14.188 07:30:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.188 07:30:39 -- setup/driver.sh@45 -- # setup output config 00:03:14.188 07:30:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.188 07:30:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:15.125 07:30:40 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:15.126 07:30:40 -- setup/driver.sh@58 -- # continue 00:03:15.126 07:30:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.126 07:30:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.126 07:30:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:03:15.126 07:30:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.126 07:30:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:15.126 07:30:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:15.126 07:30:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.126 07:30:40 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:15.126 07:30:40 -- setup/driver.sh@65 -- # setup reset 00:03:15.126 07:30:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.126 07:30:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:15.693 ************************************ 00:03:15.693 END TEST guess_driver 00:03:15.693 ************************************ 00:03:15.693 00:03:15.693 real 0m1.434s 00:03:15.693 user 0m0.558s 00:03:15.693 sys 0m0.882s 00:03:15.693 07:30:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:15.693 07:30:41 -- common/autotest_common.sh@10 -- # set +x 00:03:15.693 00:03:15.693 real 0m2.229s 00:03:15.693 user 0m0.877s 00:03:15.693 sys 0m1.425s 00:03:15.693 07:30:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:15.693 07:30:41 -- common/autotest_common.sh@10 -- # set +x 00:03:15.693 ************************************ 00:03:15.693 END TEST driver 00:03:15.693 ************************************ 00:03:15.694 07:30:41 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:15.694 07:30:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.694 07:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.694 07:30:41 -- common/autotest_common.sh@10 -- # set +x 00:03:15.694 ************************************ 00:03:15.694 START TEST devices 00:03:15.694 ************************************ 00:03:15.694 07:30:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:15.953 * Looking for test storage... 00:03:15.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:15.953 07:30:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:15.953 07:30:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:15.953 07:30:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:15.953 07:30:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:15.953 07:30:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:15.953 07:30:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:15.953 07:30:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:15.953 07:30:41 -- scripts/common.sh@335 -- # IFS=.-: 00:03:15.953 07:30:41 -- scripts/common.sh@335 -- # read -ra ver1 00:03:15.953 07:30:41 -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.953 07:30:41 -- scripts/common.sh@336 -- # read -ra ver2 00:03:15.954 07:30:41 -- scripts/common.sh@337 -- # local 'op=<' 00:03:15.954 07:30:41 -- scripts/common.sh@339 -- # ver1_l=2 00:03:15.954 07:30:41 -- scripts/common.sh@340 -- # ver2_l=1 00:03:15.954 07:30:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:15.954 07:30:41 -- scripts/common.sh@343 -- # case "$op" in 00:03:15.954 07:30:41 -- scripts/common.sh@344 -- # : 1 00:03:15.954 07:30:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:15.954 07:30:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.954 07:30:41 -- scripts/common.sh@364 -- # decimal 1 00:03:15.954 07:30:41 -- scripts/common.sh@352 -- # local d=1 00:03:15.954 07:30:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.954 07:30:41 -- scripts/common.sh@354 -- # echo 1 00:03:15.954 07:30:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:15.954 07:30:41 -- scripts/common.sh@365 -- # decimal 2 00:03:15.954 07:30:41 -- scripts/common.sh@352 -- # local d=2 00:03:15.954 07:30:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.954 07:30:41 -- scripts/common.sh@354 -- # echo 2 00:03:15.954 07:30:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:15.954 07:30:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:15.954 07:30:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:15.954 07:30:41 -- scripts/common.sh@367 -- # return 0 00:03:15.954 07:30:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.954 07:30:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.954 --rc genhtml_branch_coverage=1 00:03:15.954 --rc genhtml_function_coverage=1 00:03:15.954 --rc genhtml_legend=1 00:03:15.954 --rc geninfo_all_blocks=1 00:03:15.954 --rc geninfo_unexecuted_blocks=1 00:03:15.954 00:03:15.954 ' 00:03:15.954 07:30:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.954 --rc genhtml_branch_coverage=1 00:03:15.954 --rc genhtml_function_coverage=1 00:03:15.954 --rc genhtml_legend=1 00:03:15.954 --rc geninfo_all_blocks=1 00:03:15.954 --rc geninfo_unexecuted_blocks=1 00:03:15.954 00:03:15.954 ' 00:03:15.954 07:30:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.954 --rc genhtml_branch_coverage=1 00:03:15.954 --rc genhtml_function_coverage=1 00:03:15.954 --rc genhtml_legend=1 00:03:15.954 --rc geninfo_all_blocks=1 00:03:15.954 --rc geninfo_unexecuted_blocks=1 00:03:15.954 00:03:15.954 ' 00:03:15.954 07:30:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.954 --rc genhtml_branch_coverage=1 00:03:15.954 --rc genhtml_function_coverage=1 00:03:15.954 --rc genhtml_legend=1 00:03:15.954 --rc geninfo_all_blocks=1 00:03:15.954 --rc geninfo_unexecuted_blocks=1 00:03:15.954 00:03:15.954 ' 00:03:15.954 07:30:41 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:15.954 07:30:41 -- setup/devices.sh@192 -- # setup reset 00:03:15.954 07:30:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.954 07:30:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:16.891 07:30:42 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:16.891 07:30:42 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:16.891 07:30:42 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:16.891 07:30:42 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:16.891 07:30:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:16.891 07:30:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:16.891 07:30:42 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:16.891 07:30:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:16.891 07:30:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:16.891 07:30:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:16.891 07:30:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:16.891 07:30:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:16.891 07:30:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:16.891 07:30:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:16.891 07:30:42 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:16.891 07:30:42 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:16.891 07:30:42 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:16.891 07:30:42 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:16.891 07:30:42 -- setup/devices.sh@196 -- # blocks=() 00:03:16.891 07:30:42 -- setup/devices.sh@196 -- # declare -a blocks 00:03:16.891 07:30:42 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:16.891 07:30:42 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:16.891 07:30:42 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:16.891 07:30:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:16.891 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:16.891 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:16.891 07:30:42 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:16.892 07:30:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:16.892 07:30:42 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:16.892 07:30:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:16.892 No valid GPT data, bailing 00:03:16.892 07:30:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:16.892 07:30:42 -- scripts/common.sh@393 -- # pt= 00:03:16.892 07:30:42 -- scripts/common.sh@394 -- # return 1 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:16.892 07:30:42 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:16.892 07:30:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:16.892 07:30:42 -- setup/common.sh@80 -- # echo 5368709120 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:16.892 07:30:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:16.892 07:30:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:16.892 07:30:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:16.892 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:16.892 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:16.892 07:30:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:16.892 07:30:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
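
The devices.sh loop running here boils down to: walk the nvme namespaces under /sys/block, skip anything zoned, and keep only disks with no recognisable partition signature -- the "No valid GPT data, bailing" lines are the GPT probe failing in the good sense. A rough sketch of that filter, with an illustrative helper name and blkid standing in for the script's spdk-gpt.py probe:

  # List nvme namespaces that are safe to reformat during the test.
  candidate_disks() {
    local path dev zoned
    for path in /sys/block/nvme*n*; do
      dev=${path##*/}
      [[ $dev == *c* ]] && continue                  # skip multipath controller nodes
      # queue/zoned reads "none" for ordinary (non-ZNS) namespaces.
      zoned=$(cat "$path/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned != none ]] && continue
      # A non-empty PTTYPE means the disk already carries a partition table.
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
      echo "$dev"
    done
  }
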
00:03:16.892 07:30:42 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:16.892 07:30:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:16.892 No valid GPT data, bailing 00:03:16.892 07:30:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:16.892 07:30:42 -- scripts/common.sh@393 -- # pt= 00:03:16.892 07:30:42 -- scripts/common.sh@394 -- # return 1 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:16.892 07:30:42 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:16.892 07:30:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:16.892 07:30:42 -- setup/common.sh@80 -- # echo 4294967296 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:16.892 07:30:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:16.892 07:30:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:16.892 07:30:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:16.892 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:16.892 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:16.892 07:30:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:16.892 07:30:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:16.892 07:30:42 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:16.892 07:30:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:16.892 No valid GPT data, bailing 00:03:16.892 07:30:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:16.892 07:30:42 -- scripts/common.sh@393 -- # pt= 00:03:16.892 07:30:42 -- scripts/common.sh@394 -- # return 1 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:16.892 07:30:42 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:16.892 07:30:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:16.892 07:30:42 -- setup/common.sh@80 -- # echo 4294967296 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:16.892 07:30:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:16.892 07:30:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:16.892 07:30:42 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:16.892 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:16.892 07:30:42 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:16.892 07:30:42 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:16.892 07:30:42 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:16.892 07:30:42 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:16.892 07:30:42 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:16.892 07:30:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:17.151 No valid GPT data, bailing 00:03:17.151 07:30:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:17.151 07:30:42 -- scripts/common.sh@393 -- # pt= 00:03:17.151 07:30:42 -- scripts/common.sh@394 -- # return 1 00:03:17.151 07:30:42 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:17.151 07:30:42 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:17.151 07:30:42 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:17.151 07:30:42 -- setup/common.sh@80 -- # echo 4294967296 
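
The "echo 4294967296" entries above are a sector-to-byte conversion: the kernel reports every block device's size in 512-byte sectors under /sys/block/<dev>/size, so multiplying by 512 gives bytes, which the test then compares against its 3 GiB floor (min_disk_size=3221225472) -- that comparison is the next entry in the log. The same gate as a sketch, with an invented helper name:

  min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes

  # /sys/block/<dev>/size is a count of 512-byte sectors, independent of the
  # device's logical block size.
  dev_size_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev ]] || return 1
    echo $(( $(cat "/sys/block/$dev/size") * 512 ))
  }

  (( $(dev_size_bytes nvme1n1) >= min_disk_size ))   # 4294967296 >= 3221225472, so it passes
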
00:03:17.151 07:30:42 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:17.151 07:30:42 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:17.151 07:30:42 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:17.151 07:30:42 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:17.151 07:30:42 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:17.151 07:30:42 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:17.151 07:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.151 07:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.151 07:30:42 -- common/autotest_common.sh@10 -- # set +x 00:03:17.151 ************************************ 00:03:17.151 START TEST nvme_mount 00:03:17.151 ************************************ 00:03:17.151 07:30:42 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:17.151 07:30:42 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:17.151 07:30:42 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:17.151 07:30:42 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:17.151 07:30:42 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:17.151 07:30:42 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:17.151 07:30:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:17.151 07:30:42 -- setup/common.sh@40 -- # local part_no=1 00:03:17.151 07:30:42 -- setup/common.sh@41 -- # local size=1073741824 00:03:17.151 07:30:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:17.151 07:30:42 -- setup/common.sh@44 -- # parts=() 00:03:17.151 07:30:42 -- setup/common.sh@44 -- # local parts 00:03:17.151 07:30:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:17.151 07:30:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.151 07:30:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.151 07:30:42 -- setup/common.sh@46 -- # (( part++ )) 00:03:17.151 07:30:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.151 07:30:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:17.151 07:30:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:17.151 07:30:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:18.090 Creating new GPT entries in memory. 00:03:18.090 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:18.090 other utilities. 00:03:18.090 07:30:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:18.090 07:30:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:18.090 07:30:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:18.090 07:30:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:18.090 07:30:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:19.029 Creating new GPT entries in memory. 00:03:19.029 The operation has completed successfully. 
00:03:19.029 07:30:44 -- setup/common.sh@57 -- # (( part++ )) 00:03:19.029 07:30:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.029 07:30:44 -- setup/common.sh@62 -- # wait 52084 00:03:19.029 07:30:44 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.029 07:30:44 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:19.029 07:30:44 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.029 07:30:44 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:19.029 07:30:44 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:19.289 07:30:44 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.289 07:30:44 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:19.289 07:30:44 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:19.289 07:30:44 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:19.289 07:30:44 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.289 07:30:44 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:19.289 07:30:44 -- setup/devices.sh@53 -- # local found=0 00:03:19.289 07:30:44 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.289 07:30:44 -- setup/devices.sh@56 -- # : 00:03:19.289 07:30:44 -- setup/devices.sh@59 -- # local pci status 00:03:19.289 07:30:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.289 07:30:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:19.289 07:30:44 -- setup/devices.sh@47 -- # setup output config 00:03:19.289 07:30:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.289 07:30:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:19.289 07:30:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:19.289 07:30:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:19.289 07:30:44 -- setup/devices.sh@63 -- # found=1 00:03:19.289 07:30:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.289 07:30:44 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:19.289 07:30:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.875 07:30:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:19.875 07:30:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.875 07:30:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:19.875 07:30:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.875 07:30:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:19.875 07:30:45 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:19.875 07:30:45 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.875 07:30:45 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.875 07:30:45 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:19.875 07:30:45 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:19.875 07:30:45 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.875 07:30:45 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:19.875 07:30:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:19.875 07:30:45 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:19.875 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:19.875 07:30:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:19.875 07:30:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:20.133 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:20.133 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:20.133 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:20.133 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:20.133 07:30:45 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:20.133 07:30:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:20.134 07:30:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:20.134 07:30:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:20.134 07:30:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:20.134 07:30:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:20.134 07:30:45 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:20.134 07:30:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:20.134 07:30:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:20.134 07:30:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:20.134 07:30:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:20.134 07:30:45 -- setup/devices.sh@53 -- # local found=0 00:03:20.134 07:30:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.134 07:30:45 -- setup/devices.sh@56 -- # : 00:03:20.134 07:30:45 -- setup/devices.sh@59 -- # local pci status 00:03:20.134 07:30:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.134 07:30:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:20.134 07:30:45 -- setup/devices.sh@47 -- # setup output config 00:03:20.134 07:30:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.134 07:30:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:20.392 07:30:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:20.392 07:30:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:20.392 07:30:45 -- setup/devices.sh@63 -- # found=1 00:03:20.392 07:30:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.392 07:30:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:20.392 
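
The verify/cleanup passes that bracket each of these mounts boil down to: confirm the directory really is a mountpoint, drop and check a marker file, then unmount and scrub every on-disk signature so the next test starts from a blank device. A condensed sketch, with shortened paths rather than the script's exact helpers:

  mnt=/mnt/nvme_test          # stand-in for .../test/setup/nvme_mount
  marker=$mnt/test_nvme

  mountpoint -q "$mnt" || exit 1
  : > "$marker"                               # create the marker file on the new fs
  [[ -e $marker ]] && echo "mount verified"

  # Tear down: remove the marker, unmount, and wipe every signature wipefs can
  # find (filesystem superblock, GPT headers, protective MBR).
  rm "$marker"
  umount "$mnt"
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
  wipefs --all /dev/nvme0n1
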
07:30:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.650 07:30:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:20.650 07:30:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.650 07:30:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:20.651 07:30:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.909 07:30:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:20.910 07:30:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:20.910 07:30:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:20.910 07:30:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.910 07:30:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:20.910 07:30:46 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:20.910 07:30:46 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:20.910 07:30:46 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:20.910 07:30:46 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:20.910 07:30:46 -- setup/devices.sh@50 -- # local mount_point= 00:03:20.910 07:30:46 -- setup/devices.sh@51 -- # local test_file= 00:03:20.910 07:30:46 -- setup/devices.sh@53 -- # local found=0 00:03:20.910 07:30:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:20.910 07:30:46 -- setup/devices.sh@59 -- # local pci status 00:03:20.910 07:30:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.910 07:30:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:20.910 07:30:46 -- setup/devices.sh@47 -- # setup output config 00:03:20.910 07:30:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.910 07:30:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:21.169 07:30:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:21.169 07:30:46 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:21.169 07:30:46 -- setup/devices.sh@63 -- # found=1 00:03:21.169 07:30:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.169 07:30:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:21.169 07:30:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.428 07:30:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:21.428 07:30:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.428 07:30:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:21.428 07:30:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.687 07:30:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.687 07:30:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:21.687 07:30:47 -- setup/devices.sh@68 -- # return 0 00:03:21.687 07:30:47 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:21.687 07:30:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:21.687 07:30:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:21.687 07:30:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:21.687 07:30:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:21.687 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:21.687 ************************************ 00:03:21.687 END TEST nvme_mount 00:03:21.687 ************************************ 00:03:21.687 00:03:21.687 real 0m4.549s 00:03:21.687 user 0m1.038s 00:03:21.687 sys 0m1.188s 00:03:21.687 07:30:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:21.687 07:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.688 07:30:47 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:21.688 07:30:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.688 07:30:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.688 07:30:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.688 ************************************ 00:03:21.688 START TEST dm_mount 00:03:21.688 ************************************ 00:03:21.688 07:30:47 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:21.688 07:30:47 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:21.688 07:30:47 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:21.688 07:30:47 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:21.688 07:30:47 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:21.688 07:30:47 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:21.688 07:30:47 -- setup/common.sh@40 -- # local part_no=2 00:03:21.688 07:30:47 -- setup/common.sh@41 -- # local size=1073741824 00:03:21.688 07:30:47 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:21.688 07:30:47 -- setup/common.sh@44 -- # parts=() 00:03:21.688 07:30:47 -- setup/common.sh@44 -- # local parts 00:03:21.688 07:30:47 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:21.688 07:30:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.688 07:30:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.688 07:30:47 -- setup/common.sh@46 -- # (( part++ )) 00:03:21.688 07:30:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.688 07:30:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:21.688 07:30:47 -- setup/common.sh@46 -- # (( part++ )) 00:03:21.688 07:30:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:21.688 07:30:47 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:21.688 07:30:47 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:21.688 07:30:47 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:22.624 Creating new GPT entries in memory. 00:03:22.624 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:22.624 other utilities. 00:03:22.624 07:30:48 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:22.624 07:30:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.624 07:30:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:22.624 07:30:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.624 07:30:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:24.000 Creating new GPT entries in memory. 00:03:24.000 The operation has completed successfully. 00:03:24.000 07:30:49 -- setup/common.sh@57 -- # (( part++ )) 00:03:24.000 07:30:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.000 07:30:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:24.000 07:30:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:24.000 07:30:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:24.934 The operation has completed successfully. 00:03:24.934 07:30:50 -- setup/common.sh@57 -- # (( part++ )) 00:03:24.934 07:30:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.934 07:30:50 -- setup/common.sh@62 -- # wait 52543 00:03:24.934 07:30:50 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:24.934 07:30:50 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:24.934 07:30:50 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:24.934 07:30:50 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:24.934 07:30:50 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:24.934 07:30:50 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.934 07:30:50 -- setup/devices.sh@161 -- # break 00:03:24.934 07:30:50 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.934 07:30:50 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:24.934 07:30:50 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:24.934 07:30:50 -- setup/devices.sh@166 -- # dm=dm-0 00:03:24.934 07:30:50 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:24.934 07:30:50 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:24.934 07:30:50 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:24.934 07:30:50 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:24.934 07:30:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:24.934 07:30:50 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:24.934 07:30:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:24.934 07:30:50 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:24.934 07:30:50 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:24.934 07:30:50 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:24.934 07:30:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:24.934 07:30:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:24.934 07:30:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:24.934 07:30:50 -- setup/devices.sh@53 -- # local found=0 00:03:24.935 07:30:50 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:24.935 07:30:50 -- setup/devices.sh@56 -- # : 00:03:24.935 07:30:50 -- setup/devices.sh@59 -- # local pci status 00:03:24.935 07:30:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.935 07:30:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:24.935 07:30:50 -- setup/devices.sh@47 -- # setup output config 00:03:24.935 07:30:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.935 07:30:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:24.935 07:30:50 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:24.935 07:30:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:24.935 07:30:50 -- setup/devices.sh@63 -- # found=1 00:03:24.935 07:30:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.935 07:30:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:24.935 07:30:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.502 07:30:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:25.502 07:30:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.502 07:30:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:25.502 07:30:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.502 07:30:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:25.502 07:30:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:25.502 07:30:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:25.502 07:30:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:25.502 07:30:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:25.502 07:30:51 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:25.502 07:30:51 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:25.502 07:30:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:25.502 07:30:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:25.502 07:30:51 -- setup/devices.sh@50 -- # local mount_point= 00:03:25.502 07:30:51 -- setup/devices.sh@51 -- # local test_file= 00:03:25.502 07:30:51 -- setup/devices.sh@53 -- # local found=0 00:03:25.502 07:30:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:25.502 07:30:51 -- setup/devices.sh@59 -- # local pci status 00:03:25.502 07:30:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.502 07:30:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:25.502 07:30:51 -- setup/devices.sh@47 -- # setup output config 00:03:25.502 07:30:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.502 07:30:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:25.761 07:30:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:25.761 07:30:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:25.761 07:30:51 -- setup/devices.sh@63 -- # found=1 00:03:25.761 07:30:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.761 07:30:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:25.761 07:30:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.020 07:30:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:26.020 07:30:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.020 07:30:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:26.020 07:30:51 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.279 07:30:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.279 07:30:51 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:26.279 07:30:51 -- setup/devices.sh@68 -- # return 0 00:03:26.279 07:30:51 -- setup/devices.sh@187 -- # cleanup_dm 00:03:26.279 07:30:51 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:26.279 07:30:51 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:26.279 07:30:51 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:26.279 07:30:51 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.279 07:30:51 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:26.279 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.279 07:30:51 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:26.279 07:30:51 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:26.279 00:03:26.279 real 0m4.622s 00:03:26.279 user 0m0.723s 00:03:26.279 sys 0m0.832s 00:03:26.279 07:30:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:26.279 07:30:51 -- common/autotest_common.sh@10 -- # set +x 00:03:26.279 ************************************ 00:03:26.279 END TEST dm_mount 00:03:26.279 ************************************ 00:03:26.279 07:30:51 -- setup/devices.sh@1 -- # cleanup 00:03:26.279 07:30:51 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:26.279 07:30:51 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:26.279 07:30:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.279 07:30:51 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.279 07:30:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.279 07:30:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:26.538 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:26.538 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:26.538 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:26.538 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:26.538 07:30:52 -- setup/devices.sh@12 -- # cleanup_dm 00:03:26.538 07:30:52 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:26.538 07:30:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:26.538 07:30:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.538 07:30:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:26.538 07:30:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.538 07:30:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:26.538 00:03:26.538 real 0m10.830s 00:03:26.538 user 0m2.513s 00:03:26.538 sys 0m2.639s 00:03:26.538 07:30:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:26.538 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:03:26.538 ************************************ 00:03:26.538 END TEST devices 00:03:26.538 ************************************ 00:03:26.538 00:03:26.538 real 0m23.113s 00:03:26.538 user 0m7.981s 00:03:26.538 sys 0m9.382s 00:03:26.538 07:30:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:26.538 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:03:26.538 ************************************ 00:03:26.538 END TEST setup.sh 00:03:26.538 ************************************ 00:03:26.797 07:30:52 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:26.797 Hugepages 00:03:26.797 node hugesize free / total 00:03:26.797 node0 1048576kB 0 / 0 00:03:26.797 node0 2048kB 2048 / 2048 00:03:26.797 00:03:26.797 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:26.797 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:27.056 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:27.056 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:27.056 07:30:52 -- spdk/autotest.sh@128 -- # uname -s 00:03:27.056 07:30:52 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:03:27.056 07:30:52 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:03:27.056 07:30:52 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:27.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:27.883 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:27.883 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:27.883 07:30:53 -- common/autotest_common.sh@1527 -- # sleep 1 00:03:29.261 07:30:54 -- common/autotest_common.sh@1528 -- # bdfs=() 00:03:29.261 07:30:54 -- common/autotest_common.sh@1528 -- # local bdfs 00:03:29.261 07:30:54 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:03:29.262 07:30:54 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:03:29.262 07:30:54 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:29.262 07:30:54 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:29.262 07:30:54 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:29.262 07:30:54 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:29.262 07:30:54 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:29.262 07:30:54 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:03:29.262 07:30:54 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:29.262 07:30:54 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:29.521 Waiting for block devices as requested 00:03:29.521 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:29.521 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:03:29.521 07:30:55 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:29.521 07:30:55 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:29.521 07:30:55 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:03:29.521 07:30:55 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:03:29.521 07:30:55 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:03:29.521 07:30:55 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:29.521 07:30:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:29.521 07:30:55 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:29.521 07:30:55 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:29.521 07:30:55 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:29.521 07:30:55 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:29.521 07:30:55 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:29.521 07:30:55 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:29.521 07:30:55 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:29.521 07:30:55 -- common/autotest_common.sh@1552 -- # continue 00:03:29.521 07:30:55 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:29.521 07:30:55 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:29.521 07:30:55 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:29.521 07:30:55 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:03:29.521 07:30:55 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:29.521 07:30:55 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:03:29.521 07:30:55 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:29.779 07:30:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:03:29.779 07:30:55 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:03:29.779 07:30:55 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:03:29.779 07:30:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:29.779 07:30:55 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:29.779 07:30:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:29.779 07:30:55 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:29.779 07:30:55 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:29.779 07:30:55 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:29.779 07:30:55 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:03:29.779 07:30:55 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:29.779 07:30:55 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:29.779 07:30:55 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:29.779 07:30:55 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:29.779 07:30:55 -- common/autotest_common.sh@1552 -- # continue 00:03:29.779 07:30:55 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:03:29.779 07:30:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:29.779 07:30:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.779 07:30:55 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:03:29.779 07:30:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:29.779 07:30:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.779 07:30:55 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:30.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:30.604 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:30.605 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:03:30.605 07:30:56 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:03:30.605 07:30:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:30.605 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.605 07:30:56 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:03:30.605 07:30:56 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:03:30.605 07:30:56 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:03:30.605 07:30:56 -- common/autotest_common.sh@1572 -- # bdfs=() 00:03:30.605 07:30:56 -- common/autotest_common.sh@1572 -- # local bdfs 00:03:30.605 07:30:56 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:03:30.605 07:30:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:30.605 07:30:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:30.605 07:30:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:30.605 07:30:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:30.605 07:30:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:30.605 07:30:56 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:03:30.605 07:30:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:30.605 07:30:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:30.605 07:30:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:30.605 07:30:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:30.605 07:30:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:30.605 07:30:56 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:30.605 07:30:56 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:30.605 07:30:56 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:30.605 07:30:56 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:30.605 07:30:56 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:03:30.605 07:30:56 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:03:30.605 07:30:56 -- common/autotest_common.sh@1588 -- # return 0 00:03:30.605 07:30:56 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:03:30.605 07:30:56 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:03:30.605 07:30:56 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:30.605 07:30:56 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:30.605 07:30:56 -- spdk/autotest.sh@160 -- # timing_enter lib 00:03:30.605 07:30:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:30.605 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.605 07:30:56 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:30.605 07:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.605 07:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.605 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.864 ************************************ 00:03:30.864 START TEST env 00:03:30.864 ************************************ 00:03:30.864 07:30:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:30.864 * Looking for test storage... 
00:03:30.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:30.864 07:30:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:30.864 07:30:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:30.864 07:30:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:30.864 07:30:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:30.864 07:30:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:30.864 07:30:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:30.864 07:30:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:30.864 07:30:56 -- scripts/common.sh@335 -- # IFS=.-: 00:03:30.864 07:30:56 -- scripts/common.sh@335 -- # read -ra ver1 00:03:30.864 07:30:56 -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.864 07:30:56 -- scripts/common.sh@336 -- # read -ra ver2 00:03:30.864 07:30:56 -- scripts/common.sh@337 -- # local 'op=<' 00:03:30.864 07:30:56 -- scripts/common.sh@339 -- # ver1_l=2 00:03:30.864 07:30:56 -- scripts/common.sh@340 -- # ver2_l=1 00:03:30.864 07:30:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:30.864 07:30:56 -- scripts/common.sh@343 -- # case "$op" in 00:03:30.864 07:30:56 -- scripts/common.sh@344 -- # : 1 00:03:30.864 07:30:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:30.864 07:30:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:30.864 07:30:56 -- scripts/common.sh@364 -- # decimal 1 00:03:30.864 07:30:56 -- scripts/common.sh@352 -- # local d=1 00:03:30.864 07:30:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.864 07:30:56 -- scripts/common.sh@354 -- # echo 1 00:03:30.864 07:30:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:30.864 07:30:56 -- scripts/common.sh@365 -- # decimal 2 00:03:30.865 07:30:56 -- scripts/common.sh@352 -- # local d=2 00:03:30.865 07:30:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.865 07:30:56 -- scripts/common.sh@354 -- # echo 2 00:03:30.865 07:30:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:30.865 07:30:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:30.865 07:30:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:30.865 07:30:56 -- scripts/common.sh@367 -- # return 0 00:03:30.865 07:30:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.865 07:30:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.865 --rc genhtml_branch_coverage=1 00:03:30.865 --rc genhtml_function_coverage=1 00:03:30.865 --rc genhtml_legend=1 00:03:30.865 --rc geninfo_all_blocks=1 00:03:30.865 --rc geninfo_unexecuted_blocks=1 00:03:30.865 00:03:30.865 ' 00:03:30.865 07:30:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.865 --rc genhtml_branch_coverage=1 00:03:30.865 --rc genhtml_function_coverage=1 00:03:30.865 --rc genhtml_legend=1 00:03:30.865 --rc geninfo_all_blocks=1 00:03:30.865 --rc geninfo_unexecuted_blocks=1 00:03:30.865 00:03:30.865 ' 00:03:30.865 07:30:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.865 --rc genhtml_branch_coverage=1 00:03:30.865 --rc genhtml_function_coverage=1 00:03:30.865 --rc genhtml_legend=1 00:03:30.865 --rc geninfo_all_blocks=1 00:03:30.865 --rc geninfo_unexecuted_blocks=1 00:03:30.865 00:03:30.865 ' 00:03:30.865 07:30:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:30.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.865 --rc genhtml_branch_coverage=1 00:03:30.865 --rc genhtml_function_coverage=1 00:03:30.865 --rc genhtml_legend=1 00:03:30.865 --rc geninfo_all_blocks=1 00:03:30.865 --rc geninfo_unexecuted_blocks=1 00:03:30.865 00:03:30.865 ' 00:03:30.865 07:30:56 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:30.865 07:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.865 07:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.865 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.865 ************************************ 00:03:30.865 START TEST env_memory 00:03:30.865 ************************************ 00:03:30.865 07:30:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:30.865 00:03:30.865 00:03:30.865 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.865 http://cunit.sourceforge.net/ 00:03:30.865 00:03:30.865 00:03:30.865 Suite: memory 00:03:30.865 Test: alloc and free memory map ...[2024-12-02 07:30:56.474982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:31.124 passed 00:03:31.124 Test: mem map translation ...[2024-12-02 07:30:56.505658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:31.124 [2024-12-02 07:30:56.505694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:31.124 [2024-12-02 07:30:56.505749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:31.124 [2024-12-02 07:30:56.505759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:31.124 passed 00:03:31.124 Test: mem map registration ...[2024-12-02 07:30:56.569502] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:31.124 [2024-12-02 07:30:56.569537] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:31.124 passed 00:03:31.124 Test: mem map adjacent registrations ...passed 00:03:31.124 00:03:31.124 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.124 suites 1 1 n/a 0 0 00:03:31.124 tests 4 4 4 0 0 00:03:31.124 asserts 152 152 152 0 n/a 00:03:31.124 00:03:31.124 Elapsed time = 0.212 seconds 00:03:31.124 00:03:31.124 real 0m0.228s 00:03:31.124 user 0m0.211s 00:03:31.124 sys 0m0.013s 00:03:31.124 07:30:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:31.124 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:03:31.124 ************************************ 00:03:31.124 END TEST env_memory 00:03:31.124 ************************************ 00:03:31.124 07:30:56 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:31.124 07:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.124 07:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.124 07:30:56 -- 
common/autotest_common.sh@10 -- # set +x 00:03:31.124 ************************************ 00:03:31.124 START TEST env_vtophys 00:03:31.124 ************************************ 00:03:31.124 07:30:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:31.124 EAL: lib.eal log level changed from notice to debug 00:03:31.124 EAL: Detected lcore 0 as core 0 on socket 0 00:03:31.124 EAL: Detected lcore 1 as core 0 on socket 0 00:03:31.124 EAL: Detected lcore 2 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 3 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 4 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 5 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 6 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 7 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 8 as core 0 on socket 0 00:03:31.125 EAL: Detected lcore 9 as core 0 on socket 0 00:03:31.125 EAL: Maximum logical cores by configuration: 128 00:03:31.125 EAL: Detected CPU lcores: 10 00:03:31.125 EAL: Detected NUMA nodes: 1 00:03:31.125 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:31.125 EAL: Detected shared linkage of DPDK 00:03:31.125 EAL: No shared files mode enabled, IPC will be disabled 00:03:31.125 EAL: Selected IOVA mode 'PA' 00:03:31.125 EAL: Probing VFIO support... 00:03:31.125 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:31.125 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:31.125 EAL: Ask a virtual area of 0x2e000 bytes 00:03:31.125 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:31.125 EAL: Setting up physically contiguous memory... 00:03:31.125 EAL: Setting maximum number of open files to 524288 00:03:31.125 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:31.125 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:31.125 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.125 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:31.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.125 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.125 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:31.125 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:31.125 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.125 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:31.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.125 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.125 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:31.125 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:31.125 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.125 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:31.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.125 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.125 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:31.125 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:31.125 EAL: Ask a virtual area of 0x61000 bytes 00:03:31.125 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:31.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:31.125 EAL: Ask a virtual area of 0x400000000 bytes 00:03:31.125 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:31.125 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
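Note on the four 0x400000000-byte reservations above: they follow directly from the memseg-list geometry printed a few lines earlier ("Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152"). A minimal sketch of that arithmetic, using only values taken from this trace:

  # 8192 segments per list * 2 MiB (2097152 B) per segment = virtual span reserved per memseg list
  printf '0x%x\n' $(( 8192 * 2097152 ))    # -> 0x400000000, matching the EAL reservations above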
00:03:31.125 EAL: Hugepages will be freed exactly as allocated. 00:03:31.125 EAL: No shared files mode enabled, IPC is disabled 00:03:31.125 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: TSC frequency is ~2200000 KHz 00:03:31.410 EAL: Main lcore 0 is ready (tid=7f811ca90a00;cpuset=[0]) 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 0 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 2MB 00:03:31.410 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:31.410 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:31.410 EAL: Mem event callback 'spdk:(nil)' registered 00:03:31.410 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:31.410 00:03:31.410 00:03:31.410 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.410 http://cunit.sourceforge.net/ 00:03:31.410 00:03:31.410 00:03:31.410 Suite: components_suite 00:03:31.410 Test: vtophys_malloc_test ...passed 00:03:31.410 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 4MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 4MB 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 6MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 6MB 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 10MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 10MB 00:03:31.410 EAL: Trying to obtain current memory policy. 
00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 18MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 18MB 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 34MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 34MB 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 66MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 66MB 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 130MB 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was shrunk by 130MB 00:03:31.410 EAL: Trying to obtain current memory policy. 00:03:31.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.410 EAL: Restoring previous memory policy: 4 00:03:31.410 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.410 EAL: request: mp_malloc_sync 00:03:31.410 EAL: No shared files mode enabled, IPC is disabled 00:03:31.410 EAL: Heap on socket 0 was expanded by 258MB 00:03:31.705 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.705 EAL: request: mp_malloc_sync 00:03:31.705 EAL: No shared files mode enabled, IPC is disabled 00:03:31.705 EAL: Heap on socket 0 was shrunk by 258MB 00:03:31.705 EAL: Trying to obtain current memory policy. 
00:03:31.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.705 EAL: Restoring previous memory policy: 4 00:03:31.705 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.705 EAL: request: mp_malloc_sync 00:03:31.705 EAL: No shared files mode enabled, IPC is disabled 00:03:31.705 EAL: Heap on socket 0 was expanded by 514MB 00:03:31.705 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.705 EAL: request: mp_malloc_sync 00:03:31.705 EAL: No shared files mode enabled, IPC is disabled 00:03:31.705 EAL: Heap on socket 0 was shrunk by 514MB 00:03:31.705 EAL: Trying to obtain current memory policy. 00:03:31.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:31.968 EAL: Restoring previous memory policy: 4 00:03:31.968 EAL: Calling mem event callback 'spdk:(nil)' 00:03:31.968 EAL: request: mp_malloc_sync 00:03:31.968 EAL: No shared files mode enabled, IPC is disabled 00:03:31.968 EAL: Heap on socket 0 was expanded by 1026MB 00:03:31.968 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.227 passed 00:03:32.227 00:03:32.227 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.227 suites 1 1 n/a 0 0 00:03:32.227 tests 2 2 2 0 0 00:03:32.227 asserts 5344 5344 5344 0 n/a 00:03:32.227 00:03:32.227 Elapsed time = 0.734 seconds 00:03:32.227 EAL: request: mp_malloc_sync 00:03:32.227 EAL: No shared files mode enabled, IPC is disabled 00:03:32.227 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:32.227 EAL: Calling mem event callback 'spdk:(nil)' 00:03:32.227 EAL: request: mp_malloc_sync 00:03:32.227 EAL: No shared files mode enabled, IPC is disabled 00:03:32.227 EAL: Heap on socket 0 was shrunk by 2MB 00:03:32.227 EAL: No shared files mode enabled, IPC is disabled 00:03:32.227 EAL: No shared files mode enabled, IPC is disabled 00:03:32.227 EAL: No shared files mode enabled, IPC is disabled 00:03:32.227 00:03:32.227 real 0m0.932s 00:03:32.227 user 0m0.484s 00:03:32.227 sys 0m0.313s 00:03:32.227 07:30:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:32.227 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:32.227 ************************************ 00:03:32.227 END TEST env_vtophys 00:03:32.227 ************************************ 00:03:32.227 07:30:57 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:32.227 07:30:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.227 07:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.227 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:32.227 ************************************ 00:03:32.227 START TEST env_pci 00:03:32.227 ************************************ 00:03:32.227 07:30:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:32.227 00:03:32.227 00:03:32.227 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.227 http://cunit.sourceforge.net/ 00:03:32.227 00:03:32.227 00:03:32.227 Suite: pci 00:03:32.227 Test: pci_hook ...[2024-12-02 07:30:57.708346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53676 has claimed it 00:03:32.227 passed 00:03:32.227 00:03:32.227 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.227 suites 1 1 n/a 0 0 00:03:32.227 tests 1 1 1 0 0 00:03:32.227 asserts 25 25 25 0 n/a 00:03:32.227 00:03:32.227 Elapsed time = 0.002 seconds 00:03:32.227 EAL: Cannot find device (10000:00:01.0) 00:03:32.227 EAL: Failed to attach device 
on primary process 00:03:32.227 ************************************ 00:03:32.227 END TEST env_pci 00:03:32.227 ************************************ 00:03:32.227 00:03:32.227 real 0m0.023s 00:03:32.227 user 0m0.013s 00:03:32.227 sys 0m0.009s 00:03:32.227 07:30:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:32.227 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:32.227 07:30:57 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:32.227 07:30:57 -- env/env.sh@15 -- # uname 00:03:32.227 07:30:57 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:32.227 07:30:57 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:32.227 07:30:57 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:32.227 07:30:57 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:32.227 07:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.227 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:32.227 ************************************ 00:03:32.227 START TEST env_dpdk_post_init 00:03:32.227 ************************************ 00:03:32.227 07:30:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:32.227 EAL: Detected CPU lcores: 10 00:03:32.227 EAL: Detected NUMA nodes: 1 00:03:32.227 EAL: Detected shared linkage of DPDK 00:03:32.227 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:32.227 EAL: Selected IOVA mode 'PA' 00:03:32.486 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.486 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:32.487 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:32.487 Starting DPDK initialization... 00:03:32.487 Starting SPDK post initialization... 00:03:32.487 SPDK NVMe probe 00:03:32.487 Attaching to 0000:00:06.0 00:03:32.487 Attaching to 0000:00:07.0 00:03:32.487 Attached to 0000:00:06.0 00:03:32.487 Attached to 0000:00:07.0 00:03:32.487 Cleaning up... 
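For reference, the probe output above ("Attaching to 0000:00:06.0 ... Cleaning up...") comes from the single env_dpdk_post_init invocation traced earlier in this block. A sketch of reproducing it by hand, with paths exactly as they appear in this log and assuming setup.sh has already bound the controllers and reserved hugepages as shown earlier:

  # rebind the NVMe controllers to a userspace driver and reserve hugepages (done earlier in this log)
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # run the post-init test on core 0 with the base virtual address used by the suite
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000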
00:03:32.487 00:03:32.487 real 0m0.174s 00:03:32.487 user 0m0.039s 00:03:32.487 sys 0m0.036s 00:03:32.487 07:30:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:32.487 ************************************ 00:03:32.487 END TEST env_dpdk_post_init 00:03:32.487 ************************************ 00:03:32.487 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:32.487 07:30:57 -- env/env.sh@26 -- # uname 00:03:32.487 07:30:57 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:32.487 07:30:57 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:32.487 07:30:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.487 07:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.487 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:03:32.487 ************************************ 00:03:32.487 START TEST env_mem_callbacks 00:03:32.487 ************************************ 00:03:32.487 07:30:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:32.487 EAL: Detected CPU lcores: 10 00:03:32.487 EAL: Detected NUMA nodes: 1 00:03:32.487 EAL: Detected shared linkage of DPDK 00:03:32.487 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:32.487 EAL: Selected IOVA mode 'PA' 00:03:32.746 00:03:32.746 00:03:32.746 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.746 http://cunit.sourceforge.net/ 00:03:32.746 00:03:32.746 00:03:32.746 Suite: memory 00:03:32.746 Test: test ... 00:03:32.746 register 0x200000200000 2097152 00:03:32.746 malloc 3145728 00:03:32.746 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.746 register 0x200000400000 4194304 00:03:32.746 buf 0x200000500000 len 3145728 PASSED 00:03:32.746 malloc 64 00:03:32.746 buf 0x2000004fff40 len 64 PASSED 00:03:32.746 malloc 4194304 00:03:32.746 register 0x200000800000 6291456 00:03:32.746 buf 0x200000a00000 len 4194304 PASSED 00:03:32.746 free 0x200000500000 3145728 00:03:32.746 free 0x2000004fff40 64 00:03:32.746 unregister 0x200000400000 4194304 PASSED 00:03:32.746 free 0x200000a00000 4194304 00:03:32.746 unregister 0x200000800000 6291456 PASSED 00:03:32.746 malloc 8388608 00:03:32.746 register 0x200000400000 10485760 00:03:32.746 buf 0x200000600000 len 8388608 PASSED 00:03:32.746 free 0x200000600000 8388608 00:03:32.746 unregister 0x200000400000 10485760 PASSED 00:03:32.746 passed 00:03:32.746 00:03:32.746 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.746 suites 1 1 n/a 0 0 00:03:32.746 tests 1 1 1 0 0 00:03:32.746 asserts 15 15 15 0 n/a 00:03:32.746 00:03:32.746 Elapsed time = 0.009 seconds 00:03:32.746 00:03:32.747 real 0m0.143s 00:03:32.747 user 0m0.018s 00:03:32.747 sys 0m0.022s 00:03:32.747 07:30:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:32.747 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:03:32.747 ************************************ 00:03:32.747 END TEST env_mem_callbacks 00:03:32.747 ************************************ 00:03:32.747 ************************************ 00:03:32.747 END TEST env 00:03:32.747 ************************************ 00:03:32.747 00:03:32.747 real 0m1.964s 00:03:32.747 user 0m0.953s 00:03:32.747 sys 0m0.646s 00:03:32.747 07:30:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:32.747 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:03:32.747 07:30:58 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:03:32.747 07:30:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.747 07:30:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.747 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:03:32.747 ************************************ 00:03:32.747 START TEST rpc 00:03:32.747 ************************************ 00:03:32.747 07:30:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:32.747 * Looking for test storage... 00:03:32.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:32.747 07:30:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:32.747 07:30:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:32.747 07:30:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:33.006 07:30:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:33.006 07:30:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:33.006 07:30:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:33.006 07:30:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:33.006 07:30:58 -- scripts/common.sh@335 -- # IFS=.-: 00:03:33.006 07:30:58 -- scripts/common.sh@335 -- # read -ra ver1 00:03:33.006 07:30:58 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.006 07:30:58 -- scripts/common.sh@336 -- # read -ra ver2 00:03:33.006 07:30:58 -- scripts/common.sh@337 -- # local 'op=<' 00:03:33.006 07:30:58 -- scripts/common.sh@339 -- # ver1_l=2 00:03:33.006 07:30:58 -- scripts/common.sh@340 -- # ver2_l=1 00:03:33.006 07:30:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:33.006 07:30:58 -- scripts/common.sh@343 -- # case "$op" in 00:03:33.006 07:30:58 -- scripts/common.sh@344 -- # : 1 00:03:33.006 07:30:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:33.006 07:30:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.006 07:30:58 -- scripts/common.sh@364 -- # decimal 1 00:03:33.006 07:30:58 -- scripts/common.sh@352 -- # local d=1 00:03:33.006 07:30:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.006 07:30:58 -- scripts/common.sh@354 -- # echo 1 00:03:33.006 07:30:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:33.006 07:30:58 -- scripts/common.sh@365 -- # decimal 2 00:03:33.006 07:30:58 -- scripts/common.sh@352 -- # local d=2 00:03:33.006 07:30:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.006 07:30:58 -- scripts/common.sh@354 -- # echo 2 00:03:33.006 07:30:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:33.006 07:30:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:33.006 07:30:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:33.006 07:30:58 -- scripts/common.sh@367 -- # return 0 00:03:33.006 07:30:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.006 07:30:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:33.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.006 --rc genhtml_branch_coverage=1 00:03:33.006 --rc genhtml_function_coverage=1 00:03:33.006 --rc genhtml_legend=1 00:03:33.007 --rc geninfo_all_blocks=1 00:03:33.007 --rc geninfo_unexecuted_blocks=1 00:03:33.007 00:03:33.007 ' 00:03:33.007 07:30:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:33.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.007 --rc genhtml_branch_coverage=1 00:03:33.007 --rc genhtml_function_coverage=1 00:03:33.007 --rc genhtml_legend=1 00:03:33.007 --rc geninfo_all_blocks=1 00:03:33.007 --rc geninfo_unexecuted_blocks=1 00:03:33.007 00:03:33.007 ' 00:03:33.007 07:30:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:33.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.007 --rc genhtml_branch_coverage=1 00:03:33.007 --rc genhtml_function_coverage=1 00:03:33.007 --rc genhtml_legend=1 00:03:33.007 --rc geninfo_all_blocks=1 00:03:33.007 --rc geninfo_unexecuted_blocks=1 00:03:33.007 00:03:33.007 ' 00:03:33.007 07:30:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:33.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.007 --rc genhtml_branch_coverage=1 00:03:33.007 --rc genhtml_function_coverage=1 00:03:33.007 --rc genhtml_legend=1 00:03:33.007 --rc geninfo_all_blocks=1 00:03:33.007 --rc geninfo_unexecuted_blocks=1 00:03:33.007 00:03:33.007 ' 00:03:33.007 07:30:58 -- rpc/rpc.sh@65 -- # spdk_pid=53794 00:03:33.007 07:30:58 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:33.007 07:30:58 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.007 07:30:58 -- rpc/rpc.sh@67 -- # waitforlisten 53794 00:03:33.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.007 07:30:58 -- common/autotest_common.sh@829 -- # '[' -z 53794 ']' 00:03:33.007 07:30:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.007 07:30:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:33.007 07:30:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
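What the trace above amounts to: rpc.sh launches the target in the background, records its PID, installs a cleanup trap, and waits until the RPC socket shows up. A condensed sketch, with binary, flags and socket path as printed above; the polling loop is a simplified stand-in for the waitforlisten helper:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  trap 'kill $spdk_pid' SIGINT SIGTERM EXIT
  # wait for the UNIX-domain RPC socket the rpc calls below will talk to
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done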
00:03:33.007 07:30:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:33.007 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:03:33.007 [2024-12-02 07:30:58.506060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:33.007 [2024-12-02 07:30:58.506341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53794 ] 00:03:33.266 [2024-12-02 07:30:58.636321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.266 [2024-12-02 07:30:58.686410] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:33.266 [2024-12-02 07:30:58.686811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:33.266 [2024-12-02 07:30:58.686835] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53794' to capture a snapshot of events at runtime. 00:03:33.266 [2024-12-02 07:30:58.686844] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53794 for offline analysis/debug. 00:03:33.266 [2024-12-02 07:30:58.686876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.203 07:30:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:34.203 07:30:59 -- common/autotest_common.sh@862 -- # return 0 00:03:34.203 07:30:59 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:34.203 07:30:59 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:34.203 07:30:59 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:34.203 07:30:59 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:34.203 07:30:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.203 07:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.203 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.203 ************************************ 00:03:34.203 START TEST rpc_integrity 00:03:34.203 ************************************ 00:03:34.203 07:30:59 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:34.203 07:30:59 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.203 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.203 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.203 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.203 07:30:59 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.203 07:30:59 -- rpc/rpc.sh@13 -- # jq length 00:03:34.203 07:30:59 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.203 07:30:59 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.203 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.203 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.203 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.203 07:30:59 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:34.203 07:30:59 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.203 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.203 07:30:59 -- 
common/autotest_common.sh@10 -- # set +x 00:03:34.203 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.203 07:30:59 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.203 { 00:03:34.203 "name": "Malloc0", 00:03:34.203 "aliases": [ 00:03:34.203 "83db402b-2053-41f6-a043-2fedef85d3ed" 00:03:34.203 ], 00:03:34.203 "product_name": "Malloc disk", 00:03:34.203 "block_size": 512, 00:03:34.203 "num_blocks": 16384, 00:03:34.203 "uuid": "83db402b-2053-41f6-a043-2fedef85d3ed", 00:03:34.203 "assigned_rate_limits": { 00:03:34.203 "rw_ios_per_sec": 0, 00:03:34.203 "rw_mbytes_per_sec": 0, 00:03:34.203 "r_mbytes_per_sec": 0, 00:03:34.203 "w_mbytes_per_sec": 0 00:03:34.203 }, 00:03:34.203 "claimed": false, 00:03:34.203 "zoned": false, 00:03:34.203 "supported_io_types": { 00:03:34.203 "read": true, 00:03:34.204 "write": true, 00:03:34.204 "unmap": true, 00:03:34.204 "write_zeroes": true, 00:03:34.204 "flush": true, 00:03:34.204 "reset": true, 00:03:34.204 "compare": false, 00:03:34.204 "compare_and_write": false, 00:03:34.204 "abort": true, 00:03:34.204 "nvme_admin": false, 00:03:34.204 "nvme_io": false 00:03:34.204 }, 00:03:34.204 "memory_domains": [ 00:03:34.204 { 00:03:34.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.204 "dma_device_type": 2 00:03:34.204 } 00:03:34.204 ], 00:03:34.204 "driver_specific": {} 00:03:34.204 } 00:03:34.204 ]' 00:03:34.204 07:30:59 -- rpc/rpc.sh@17 -- # jq length 00:03:34.204 07:30:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.204 07:30:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:34.204 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.204 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.204 [2024-12-02 07:30:59.682395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:34.204 [2024-12-02 07:30:59.682455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.204 [2024-12-02 07:30:59.682472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12dc4c0 00:03:34.204 [2024-12-02 07:30:59.682481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.204 [2024-12-02 07:30:59.683897] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.204 [2024-12-02 07:30:59.683931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.204 Passthru0 00:03:34.204 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.204 07:30:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.204 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.204 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.204 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.204 07:30:59 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.204 { 00:03:34.204 "name": "Malloc0", 00:03:34.204 "aliases": [ 00:03:34.204 "83db402b-2053-41f6-a043-2fedef85d3ed" 00:03:34.204 ], 00:03:34.204 "product_name": "Malloc disk", 00:03:34.204 "block_size": 512, 00:03:34.204 "num_blocks": 16384, 00:03:34.204 "uuid": "83db402b-2053-41f6-a043-2fedef85d3ed", 00:03:34.204 "assigned_rate_limits": { 00:03:34.204 "rw_ios_per_sec": 0, 00:03:34.204 "rw_mbytes_per_sec": 0, 00:03:34.204 "r_mbytes_per_sec": 0, 00:03:34.204 "w_mbytes_per_sec": 0 00:03:34.204 }, 00:03:34.204 "claimed": true, 00:03:34.204 "claim_type": "exclusive_write", 00:03:34.204 "zoned": false, 00:03:34.204 "supported_io_types": { 00:03:34.204 "read": true, 
00:03:34.204 "write": true, 00:03:34.204 "unmap": true, 00:03:34.204 "write_zeroes": true, 00:03:34.204 "flush": true, 00:03:34.204 "reset": true, 00:03:34.204 "compare": false, 00:03:34.204 "compare_and_write": false, 00:03:34.204 "abort": true, 00:03:34.204 "nvme_admin": false, 00:03:34.204 "nvme_io": false 00:03:34.204 }, 00:03:34.204 "memory_domains": [ 00:03:34.204 { 00:03:34.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.204 "dma_device_type": 2 00:03:34.204 } 00:03:34.204 ], 00:03:34.204 "driver_specific": {} 00:03:34.204 }, 00:03:34.204 { 00:03:34.204 "name": "Passthru0", 00:03:34.204 "aliases": [ 00:03:34.204 "f13f6433-6008-59a1-9c62-696b286d7ce6" 00:03:34.204 ], 00:03:34.204 "product_name": "passthru", 00:03:34.204 "block_size": 512, 00:03:34.204 "num_blocks": 16384, 00:03:34.204 "uuid": "f13f6433-6008-59a1-9c62-696b286d7ce6", 00:03:34.204 "assigned_rate_limits": { 00:03:34.204 "rw_ios_per_sec": 0, 00:03:34.204 "rw_mbytes_per_sec": 0, 00:03:34.204 "r_mbytes_per_sec": 0, 00:03:34.204 "w_mbytes_per_sec": 0 00:03:34.204 }, 00:03:34.204 "claimed": false, 00:03:34.204 "zoned": false, 00:03:34.204 "supported_io_types": { 00:03:34.204 "read": true, 00:03:34.204 "write": true, 00:03:34.204 "unmap": true, 00:03:34.204 "write_zeroes": true, 00:03:34.204 "flush": true, 00:03:34.204 "reset": true, 00:03:34.204 "compare": false, 00:03:34.204 "compare_and_write": false, 00:03:34.204 "abort": true, 00:03:34.204 "nvme_admin": false, 00:03:34.204 "nvme_io": false 00:03:34.204 }, 00:03:34.204 "memory_domains": [ 00:03:34.204 { 00:03:34.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.204 "dma_device_type": 2 00:03:34.204 } 00:03:34.204 ], 00:03:34.204 "driver_specific": { 00:03:34.204 "passthru": { 00:03:34.204 "name": "Passthru0", 00:03:34.204 "base_bdev_name": "Malloc0" 00:03:34.204 } 00:03:34.204 } 00:03:34.204 } 00:03:34.204 ]' 00:03:34.204 07:30:59 -- rpc/rpc.sh@21 -- # jq length 00:03:34.204 07:30:59 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.204 07:30:59 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.204 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.204 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.204 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.204 07:30:59 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:34.204 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.204 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.204 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.204 07:30:59 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.204 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.204 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.204 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.204 07:30:59 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.204 07:30:59 -- rpc/rpc.sh@26 -- # jq length 00:03:34.463 07:30:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.463 00:03:34.463 real 0m0.308s 00:03:34.463 ************************************ 00:03:34.463 END TEST rpc_integrity 00:03:34.463 ************************************ 00:03:34.463 user 0m0.201s 00:03:34.463 sys 0m0.039s 00:03:34.463 07:30:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.463 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.463 07:30:59 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:34.463 07:30:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:03:34.463 07:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.463 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.463 ************************************ 00:03:34.463 START TEST rpc_plugins 00:03:34.463 ************************************ 00:03:34.463 07:30:59 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:03:34.463 07:30:59 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.463 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.463 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.463 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.463 07:30:59 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.463 07:30:59 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.463 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.463 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.463 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.463 07:30:59 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.463 { 00:03:34.463 "name": "Malloc1", 00:03:34.463 "aliases": [ 00:03:34.463 "f10af3ba-9d5d-43e4-89e1-3e700ca35508" 00:03:34.463 ], 00:03:34.463 "product_name": "Malloc disk", 00:03:34.463 "block_size": 4096, 00:03:34.463 "num_blocks": 256, 00:03:34.463 "uuid": "f10af3ba-9d5d-43e4-89e1-3e700ca35508", 00:03:34.463 "assigned_rate_limits": { 00:03:34.463 "rw_ios_per_sec": 0, 00:03:34.463 "rw_mbytes_per_sec": 0, 00:03:34.463 "r_mbytes_per_sec": 0, 00:03:34.463 "w_mbytes_per_sec": 0 00:03:34.463 }, 00:03:34.463 "claimed": false, 00:03:34.463 "zoned": false, 00:03:34.463 "supported_io_types": { 00:03:34.463 "read": true, 00:03:34.463 "write": true, 00:03:34.463 "unmap": true, 00:03:34.463 "write_zeroes": true, 00:03:34.463 "flush": true, 00:03:34.463 "reset": true, 00:03:34.463 "compare": false, 00:03:34.463 "compare_and_write": false, 00:03:34.463 "abort": true, 00:03:34.463 "nvme_admin": false, 00:03:34.463 "nvme_io": false 00:03:34.463 }, 00:03:34.463 "memory_domains": [ 00:03:34.463 { 00:03:34.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.463 "dma_device_type": 2 00:03:34.463 } 00:03:34.463 ], 00:03:34.463 "driver_specific": {} 00:03:34.463 } 00:03:34.463 ]' 00:03:34.463 07:30:59 -- rpc/rpc.sh@32 -- # jq length 00:03:34.463 07:30:59 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.463 07:30:59 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.464 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.464 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.464 07:30:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.464 07:30:59 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.464 07:30:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.464 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:34.464 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.464 07:31:00 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.464 07:31:00 -- rpc/rpc.sh@36 -- # jq length 00:03:34.464 ************************************ 00:03:34.464 END TEST rpc_plugins 00:03:34.464 ************************************ 00:03:34.464 07:31:00 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.464 00:03:34.464 real 0m0.158s 00:03:34.464 user 0m0.106s 00:03:34.464 sys 0m0.013s 00:03:34.464 07:31:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.464 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.723 07:31:00 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:03:34.723 07:31:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.723 07:31:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.723 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.723 ************************************ 00:03:34.723 START TEST rpc_trace_cmd_test 00:03:34.723 ************************************ 00:03:34.723 07:31:00 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:03:34.723 07:31:00 -- rpc/rpc.sh@40 -- # local info 00:03:34.723 07:31:00 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.723 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.723 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.723 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.723 07:31:00 -- rpc/rpc.sh@42 -- # info='{ 00:03:34.723 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53794", 00:03:34.723 "tpoint_group_mask": "0x8", 00:03:34.723 "iscsi_conn": { 00:03:34.723 "mask": "0x2", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "scsi": { 00:03:34.723 "mask": "0x4", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "bdev": { 00:03:34.723 "mask": "0x8", 00:03:34.723 "tpoint_mask": "0xffffffffffffffff" 00:03:34.723 }, 00:03:34.723 "nvmf_rdma": { 00:03:34.723 "mask": "0x10", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "nvmf_tcp": { 00:03:34.723 "mask": "0x20", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "ftl": { 00:03:34.723 "mask": "0x40", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "blobfs": { 00:03:34.723 "mask": "0x80", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "dsa": { 00:03:34.723 "mask": "0x200", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "thread": { 00:03:34.723 "mask": "0x400", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "nvme_pcie": { 00:03:34.723 "mask": "0x800", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "iaa": { 00:03:34.723 "mask": "0x1000", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "nvme_tcp": { 00:03:34.723 "mask": "0x2000", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 }, 00:03:34.723 "bdev_nvme": { 00:03:34.723 "mask": "0x4000", 00:03:34.723 "tpoint_mask": "0x0" 00:03:34.723 } 00:03:34.723 }' 00:03:34.723 07:31:00 -- rpc/rpc.sh@43 -- # jq length 00:03:34.723 07:31:00 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:34.723 07:31:00 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.723 07:31:00 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.723 07:31:00 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.723 07:31:00 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.723 07:31:00 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:34.723 07:31:00 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:34.723 07:31:00 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:34.982 07:31:00 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:34.982 00:03:34.982 real 0m0.270s 00:03:34.982 user 0m0.236s 00:03:34.982 sys 0m0.025s 00:03:34.982 07:31:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 ************************************ 00:03:34.982 END TEST rpc_trace_cmd_test 00:03:34.982 ************************************ 00:03:34.982 07:31:00 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:34.982 07:31:00 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:34.982 07:31:00 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:03:34.982 07:31:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.982 07:31:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 ************************************ 00:03:34.982 START TEST rpc_daemon_integrity 00:03:34.982 ************************************ 00:03:34.982 07:31:00 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:34.982 07:31:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.982 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.982 07:31:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.982 07:31:00 -- rpc/rpc.sh@13 -- # jq length 00:03:34.982 07:31:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.982 07:31:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.982 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.982 07:31:00 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:34.982 07:31:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.982 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.982 07:31:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.982 { 00:03:34.982 "name": "Malloc2", 00:03:34.982 "aliases": [ 00:03:34.982 "65171153-ea47-471d-8f40-e8e4af0aa033" 00:03:34.982 ], 00:03:34.982 "product_name": "Malloc disk", 00:03:34.982 "block_size": 512, 00:03:34.982 "num_blocks": 16384, 00:03:34.982 "uuid": "65171153-ea47-471d-8f40-e8e4af0aa033", 00:03:34.982 "assigned_rate_limits": { 00:03:34.982 "rw_ios_per_sec": 0, 00:03:34.982 "rw_mbytes_per_sec": 0, 00:03:34.982 "r_mbytes_per_sec": 0, 00:03:34.982 "w_mbytes_per_sec": 0 00:03:34.982 }, 00:03:34.982 "claimed": false, 00:03:34.982 "zoned": false, 00:03:34.982 "supported_io_types": { 00:03:34.982 "read": true, 00:03:34.982 "write": true, 00:03:34.982 "unmap": true, 00:03:34.982 "write_zeroes": true, 00:03:34.982 "flush": true, 00:03:34.982 "reset": true, 00:03:34.982 "compare": false, 00:03:34.982 "compare_and_write": false, 00:03:34.982 "abort": true, 00:03:34.982 "nvme_admin": false, 00:03:34.982 "nvme_io": false 00:03:34.982 }, 00:03:34.982 "memory_domains": [ 00:03:34.982 { 00:03:34.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.982 "dma_device_type": 2 00:03:34.982 } 00:03:34.982 ], 00:03:34.982 "driver_specific": {} 00:03:34.982 } 00:03:34.982 ]' 00:03:34.982 07:31:00 -- rpc/rpc.sh@17 -- # jq length 00:03:34.982 07:31:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.982 07:31:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:34.982 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 [2024-12-02 07:31:00.578710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:34.982 [2024-12-02 07:31:00.578903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.982 [2024-12-02 07:31:00.578958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12dcc40 00:03:34.982 [2024-12-02 
07:31:00.578969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.982 [2024-12-02 07:31:00.580276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.982 [2024-12-02 07:31:00.580349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.982 Passthru0 00:03:34.982 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.982 07:31:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.982 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.982 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.242 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:35.242 07:31:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:35.242 { 00:03:35.242 "name": "Malloc2", 00:03:35.242 "aliases": [ 00:03:35.242 "65171153-ea47-471d-8f40-e8e4af0aa033" 00:03:35.242 ], 00:03:35.242 "product_name": "Malloc disk", 00:03:35.242 "block_size": 512, 00:03:35.242 "num_blocks": 16384, 00:03:35.242 "uuid": "65171153-ea47-471d-8f40-e8e4af0aa033", 00:03:35.242 "assigned_rate_limits": { 00:03:35.242 "rw_ios_per_sec": 0, 00:03:35.242 "rw_mbytes_per_sec": 0, 00:03:35.242 "r_mbytes_per_sec": 0, 00:03:35.242 "w_mbytes_per_sec": 0 00:03:35.242 }, 00:03:35.242 "claimed": true, 00:03:35.242 "claim_type": "exclusive_write", 00:03:35.242 "zoned": false, 00:03:35.242 "supported_io_types": { 00:03:35.242 "read": true, 00:03:35.242 "write": true, 00:03:35.242 "unmap": true, 00:03:35.242 "write_zeroes": true, 00:03:35.242 "flush": true, 00:03:35.242 "reset": true, 00:03:35.242 "compare": false, 00:03:35.242 "compare_and_write": false, 00:03:35.242 "abort": true, 00:03:35.242 "nvme_admin": false, 00:03:35.242 "nvme_io": false 00:03:35.242 }, 00:03:35.242 "memory_domains": [ 00:03:35.242 { 00:03:35.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.242 "dma_device_type": 2 00:03:35.242 } 00:03:35.242 ], 00:03:35.242 "driver_specific": {} 00:03:35.242 }, 00:03:35.242 { 00:03:35.242 "name": "Passthru0", 00:03:35.242 "aliases": [ 00:03:35.242 "083c315a-a18e-55c2-a7b2-485083a81fa4" 00:03:35.242 ], 00:03:35.242 "product_name": "passthru", 00:03:35.242 "block_size": 512, 00:03:35.242 "num_blocks": 16384, 00:03:35.242 "uuid": "083c315a-a18e-55c2-a7b2-485083a81fa4", 00:03:35.242 "assigned_rate_limits": { 00:03:35.242 "rw_ios_per_sec": 0, 00:03:35.242 "rw_mbytes_per_sec": 0, 00:03:35.242 "r_mbytes_per_sec": 0, 00:03:35.242 "w_mbytes_per_sec": 0 00:03:35.242 }, 00:03:35.242 "claimed": false, 00:03:35.242 "zoned": false, 00:03:35.242 "supported_io_types": { 00:03:35.242 "read": true, 00:03:35.242 "write": true, 00:03:35.242 "unmap": true, 00:03:35.242 "write_zeroes": true, 00:03:35.242 "flush": true, 00:03:35.242 "reset": true, 00:03:35.242 "compare": false, 00:03:35.242 "compare_and_write": false, 00:03:35.242 "abort": true, 00:03:35.242 "nvme_admin": false, 00:03:35.242 "nvme_io": false 00:03:35.242 }, 00:03:35.242 "memory_domains": [ 00:03:35.242 { 00:03:35.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:35.242 "dma_device_type": 2 00:03:35.242 } 00:03:35.242 ], 00:03:35.242 "driver_specific": { 00:03:35.242 "passthru": { 00:03:35.242 "name": "Passthru0", 00:03:35.242 "base_bdev_name": "Malloc2" 00:03:35.242 } 00:03:35.242 } 00:03:35.242 } 00:03:35.242 ]' 00:03:35.242 07:31:00 -- rpc/rpc.sh@21 -- # jq length 00:03:35.242 07:31:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:35.242 07:31:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:35.242 07:31:00 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:03:35.242 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.242 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:35.242 07:31:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:35.242 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:35.242 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.242 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:35.242 07:31:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:35.242 07:31:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:35.242 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.242 07:31:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:35.242 07:31:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:35.242 07:31:00 -- rpc/rpc.sh@26 -- # jq length 00:03:35.242 ************************************ 00:03:35.242 END TEST rpc_daemon_integrity 00:03:35.242 ************************************ 00:03:35.242 07:31:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:35.242 00:03:35.242 real 0m0.316s 00:03:35.242 user 0m0.213s 00:03:35.242 sys 0m0.038s 00:03:35.242 07:31:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:35.242 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.242 07:31:00 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:35.242 07:31:00 -- rpc/rpc.sh@84 -- # killprocess 53794 00:03:35.242 07:31:00 -- common/autotest_common.sh@936 -- # '[' -z 53794 ']' 00:03:35.242 07:31:00 -- common/autotest_common.sh@940 -- # kill -0 53794 00:03:35.242 07:31:00 -- common/autotest_common.sh@941 -- # uname 00:03:35.242 07:31:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:35.242 07:31:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53794 00:03:35.242 07:31:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:35.242 07:31:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:35.242 killing process with pid 53794 00:03:35.242 07:31:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53794' 00:03:35.242 07:31:00 -- common/autotest_common.sh@955 -- # kill 53794 00:03:35.242 07:31:00 -- common/autotest_common.sh@960 -- # wait 53794 00:03:35.502 00:03:35.502 real 0m2.811s 00:03:35.502 user 0m3.797s 00:03:35.502 sys 0m0.543s 00:03:35.502 07:31:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:35.502 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:35.502 ************************************ 00:03:35.502 END TEST rpc 00:03:35.502 ************************************ 00:03:35.502 07:31:01 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:35.502 07:31:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.502 07:31:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.502 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:35.502 ************************************ 00:03:35.502 START TEST rpc_client 00:03:35.502 ************************************ 00:03:35.502 07:31:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:35.761 * Looking for test storage... 
00:03:35.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:35.761 07:31:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:35.761 07:31:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:35.761 07:31:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:35.762 07:31:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:35.762 07:31:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:35.762 07:31:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:35.762 07:31:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:35.762 07:31:01 -- scripts/common.sh@335 -- # IFS=.-: 00:03:35.762 07:31:01 -- scripts/common.sh@335 -- # read -ra ver1 00:03:35.762 07:31:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.762 07:31:01 -- scripts/common.sh@336 -- # read -ra ver2 00:03:35.762 07:31:01 -- scripts/common.sh@337 -- # local 'op=<' 00:03:35.762 07:31:01 -- scripts/common.sh@339 -- # ver1_l=2 00:03:35.762 07:31:01 -- scripts/common.sh@340 -- # ver2_l=1 00:03:35.762 07:31:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:35.762 07:31:01 -- scripts/common.sh@343 -- # case "$op" in 00:03:35.762 07:31:01 -- scripts/common.sh@344 -- # : 1 00:03:35.762 07:31:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:35.762 07:31:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:35.762 07:31:01 -- scripts/common.sh@364 -- # decimal 1 00:03:35.762 07:31:01 -- scripts/common.sh@352 -- # local d=1 00:03:35.762 07:31:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.762 07:31:01 -- scripts/common.sh@354 -- # echo 1 00:03:35.762 07:31:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:35.762 07:31:01 -- scripts/common.sh@365 -- # decimal 2 00:03:35.762 07:31:01 -- scripts/common.sh@352 -- # local d=2 00:03:35.762 07:31:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.762 07:31:01 -- scripts/common.sh@354 -- # echo 2 00:03:35.762 07:31:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:35.762 07:31:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:35.762 07:31:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:35.762 07:31:01 -- scripts/common.sh@367 -- # return 0 00:03:35.762 07:31:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.762 07:31:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.762 --rc genhtml_branch_coverage=1 00:03:35.762 --rc genhtml_function_coverage=1 00:03:35.762 --rc genhtml_legend=1 00:03:35.762 --rc geninfo_all_blocks=1 00:03:35.762 --rc geninfo_unexecuted_blocks=1 00:03:35.762 00:03:35.762 ' 00:03:35.762 07:31:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.762 --rc genhtml_branch_coverage=1 00:03:35.762 --rc genhtml_function_coverage=1 00:03:35.762 --rc genhtml_legend=1 00:03:35.762 --rc geninfo_all_blocks=1 00:03:35.762 --rc geninfo_unexecuted_blocks=1 00:03:35.762 00:03:35.762 ' 00:03:35.762 07:31:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.762 --rc genhtml_branch_coverage=1 00:03:35.762 --rc genhtml_function_coverage=1 00:03:35.762 --rc genhtml_legend=1 00:03:35.762 --rc geninfo_all_blocks=1 00:03:35.762 --rc geninfo_unexecuted_blocks=1 00:03:35.762 00:03:35.762 ' 00:03:35.762 
07:31:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.762 --rc genhtml_branch_coverage=1 00:03:35.762 --rc genhtml_function_coverage=1 00:03:35.762 --rc genhtml_legend=1 00:03:35.762 --rc geninfo_all_blocks=1 00:03:35.762 --rc geninfo_unexecuted_blocks=1 00:03:35.762 00:03:35.762 ' 00:03:35.762 07:31:01 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:35.762 OK 00:03:35.762 07:31:01 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:35.762 ************************************ 00:03:35.762 END TEST rpc_client 00:03:35.762 ************************************ 00:03:35.762 00:03:35.762 real 0m0.201s 00:03:35.762 user 0m0.125s 00:03:35.762 sys 0m0.085s 00:03:35.762 07:31:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:35.762 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:35.762 07:31:01 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:35.762 07:31:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.762 07:31:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.762 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:35.762 ************************************ 00:03:35.762 START TEST json_config 00:03:35.762 ************************************ 00:03:35.762 07:31:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:36.022 07:31:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:36.022 07:31:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:36.022 07:31:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:36.022 07:31:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:36.022 07:31:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:36.022 07:31:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:36.022 07:31:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:36.022 07:31:01 -- scripts/common.sh@335 -- # IFS=.-: 00:03:36.022 07:31:01 -- scripts/common.sh@335 -- # read -ra ver1 00:03:36.022 07:31:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:36.022 07:31:01 -- scripts/common.sh@336 -- # read -ra ver2 00:03:36.022 07:31:01 -- scripts/common.sh@337 -- # local 'op=<' 00:03:36.022 07:31:01 -- scripts/common.sh@339 -- # ver1_l=2 00:03:36.022 07:31:01 -- scripts/common.sh@340 -- # ver2_l=1 00:03:36.022 07:31:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:36.022 07:31:01 -- scripts/common.sh@343 -- # case "$op" in 00:03:36.022 07:31:01 -- scripts/common.sh@344 -- # : 1 00:03:36.022 07:31:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:36.022 07:31:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:36.022 07:31:01 -- scripts/common.sh@364 -- # decimal 1 00:03:36.022 07:31:01 -- scripts/common.sh@352 -- # local d=1 00:03:36.022 07:31:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:36.022 07:31:01 -- scripts/common.sh@354 -- # echo 1 00:03:36.022 07:31:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:36.022 07:31:01 -- scripts/common.sh@365 -- # decimal 2 00:03:36.022 07:31:01 -- scripts/common.sh@352 -- # local d=2 00:03:36.022 07:31:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:36.022 07:31:01 -- scripts/common.sh@354 -- # echo 2 00:03:36.022 07:31:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:36.022 07:31:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:36.022 07:31:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:36.022 07:31:01 -- scripts/common.sh@367 -- # return 0 00:03:36.022 07:31:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:36.022 07:31:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:36.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.022 --rc genhtml_branch_coverage=1 00:03:36.022 --rc genhtml_function_coverage=1 00:03:36.022 --rc genhtml_legend=1 00:03:36.022 --rc geninfo_all_blocks=1 00:03:36.022 --rc geninfo_unexecuted_blocks=1 00:03:36.022 00:03:36.022 ' 00:03:36.022 07:31:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:36.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.022 --rc genhtml_branch_coverage=1 00:03:36.022 --rc genhtml_function_coverage=1 00:03:36.022 --rc genhtml_legend=1 00:03:36.022 --rc geninfo_all_blocks=1 00:03:36.022 --rc geninfo_unexecuted_blocks=1 00:03:36.022 00:03:36.022 ' 00:03:36.022 07:31:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:36.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.022 --rc genhtml_branch_coverage=1 00:03:36.022 --rc genhtml_function_coverage=1 00:03:36.022 --rc genhtml_legend=1 00:03:36.022 --rc geninfo_all_blocks=1 00:03:36.022 --rc geninfo_unexecuted_blocks=1 00:03:36.022 00:03:36.022 ' 00:03:36.022 07:31:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:36.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:36.022 --rc genhtml_branch_coverage=1 00:03:36.022 --rc genhtml_function_coverage=1 00:03:36.022 --rc genhtml_legend=1 00:03:36.022 --rc geninfo_all_blocks=1 00:03:36.022 --rc geninfo_unexecuted_blocks=1 00:03:36.022 00:03:36.022 ' 00:03:36.022 07:31:01 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:36.022 07:31:01 -- nvmf/common.sh@7 -- # uname -s 00:03:36.022 07:31:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:36.022 07:31:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:36.022 07:31:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:36.022 07:31:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:36.022 07:31:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:36.022 07:31:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:36.022 07:31:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:36.022 07:31:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:36.022 07:31:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:36.022 07:31:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:36.022 07:31:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 
00:03:36.022 07:31:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:03:36.022 07:31:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:36.022 07:31:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:36.022 07:31:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:36.022 07:31:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:36.022 07:31:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:36.022 07:31:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:36.022 07:31:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:36.022 07:31:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.022 07:31:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.022 07:31:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.022 07:31:01 -- paths/export.sh@5 -- # export PATH 00:03:36.022 07:31:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.022 07:31:01 -- nvmf/common.sh@46 -- # : 0 00:03:36.022 07:31:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:36.022 07:31:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:36.022 07:31:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:36.022 07:31:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:36.022 07:31:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:36.023 07:31:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:36.023 07:31:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:36.023 07:31:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:36.023 07:31:01 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:36.023 07:31:01 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:36.023 07:31:01 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:36.023 07:31:01 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:36.023 07:31:01 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:36.023 07:31:01 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:36.023 07:31:01 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:36.023 07:31:01 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:36.023 07:31:01 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:36.023 07:31:01 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:36.023 07:31:01 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:36.023 INFO: JSON configuration test init 00:03:36.023 07:31:01 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:36.023 07:31:01 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:36.023 07:31:01 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:36.023 07:31:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:36.023 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:36.023 07:31:01 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:36.023 07:31:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:36.023 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:36.023 07:31:01 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:36.023 07:31:01 -- json_config/json_config.sh@98 -- # local app=target 00:03:36.023 07:31:01 -- json_config/json_config.sh@99 -- # shift 00:03:36.023 07:31:01 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:36.023 07:31:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:36.023 07:31:01 -- json_config/json_config.sh@111 -- # app_pid[$app]=54049 00:03:36.023 07:31:01 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:36.023 Waiting for target to run... 00:03:36.023 07:31:01 -- json_config/json_config.sh@114 -- # waitforlisten 54049 /var/tmp/spdk_tgt.sock 00:03:36.023 07:31:01 -- common/autotest_common.sh@829 -- # '[' -z 54049 ']' 00:03:36.023 07:31:01 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:36.023 07:31:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:36.023 07:31:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:36.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:36.023 07:31:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
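For context on the launch captured just above: the json_config test starts its own spdk_tgt on a private RPC socket and waits (waitforlisten) for that socket before issuing any RPCs. A rough standalone equivalent, assuming the same build tree and socket path, with a plain polling loop standing in for the test's waitforlisten helper:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
while [ ! -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done    # crude wait for the UNIX-domain RPC socket to appear
echo "spdk_tgt ($tgt_pid) listening on /var/tmp/spdk_tgt.sock"

Because of --wait-for-rpc the target holds off full initialization until driven over RPC, which is why the test's first real action below is to push configuration in via load_config on that socket rather than assuming an already-initialized target.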
00:03:36.023 07:31:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:36.023 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:03:36.023 [2024-12-02 07:31:01.602342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:36.023 [2024-12-02 07:31:01.602447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54049 ] 00:03:36.282 [2024-12-02 07:31:01.865805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.282 [2024-12-02 07:31:01.903240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:36.282 [2024-12-02 07:31:01.903444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.218 07:31:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:37.218 07:31:02 -- common/autotest_common.sh@862 -- # return 0 00:03:37.218 00:03:37.218 07:31:02 -- json_config/json_config.sh@115 -- # echo '' 00:03:37.218 07:31:02 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:37.218 07:31:02 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:37.218 07:31:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:37.218 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:03:37.218 07:31:02 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:37.218 07:31:02 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:37.218 07:31:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:37.218 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:03:37.218 07:31:02 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:37.218 07:31:02 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:37.218 07:31:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:37.477 07:31:03 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:37.478 07:31:03 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:37.478 07:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:37.478 07:31:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.478 07:31:03 -- json_config/json_config.sh@48 -- # local ret=0 00:03:37.478 07:31:03 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:37.478 07:31:03 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:37.478 07:31:03 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:37.478 07:31:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:37.478 07:31:03 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:37.737 07:31:03 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:37.737 07:31:03 -- json_config/json_config.sh@51 -- # local get_types 00:03:37.737 07:31:03 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:37.737 07:31:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:37.737 07:31:03 -- 
common/autotest_common.sh@10 -- # set +x 00:03:37.737 07:31:03 -- json_config/json_config.sh@58 -- # return 0 00:03:37.737 07:31:03 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:37.737 07:31:03 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:37.737 07:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:37.737 07:31:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.737 07:31:03 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:37.737 07:31:03 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:37.737 07:31:03 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:37.737 07:31:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:37.997 MallocForNvmf0 00:03:37.997 07:31:03 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:37.997 07:31:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:38.256 MallocForNvmf1 00:03:38.256 07:31:03 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:38.256 07:31:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:38.516 [2024-12-02 07:31:04.100583] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:38.516 07:31:04 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:38.516 07:31:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:38.774 07:31:04 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:38.774 07:31:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:39.032 07:31:04 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:39.032 07:31:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:39.291 07:31:04 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:39.291 07:31:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:39.291 [2024-12-02 07:31:04.868962] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:39.291 
07:31:04 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:03:39.291 07:31:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:39.291 07:31:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.551 07:31:04 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:39.551 07:31:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:39.551 07:31:04 -- common/autotest_common.sh@10 -- # set +x 00:03:39.551 07:31:04 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:39.551 07:31:04 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:39.551 07:31:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:39.809 MallocBdevForConfigChangeCheck 00:03:39.809 07:31:05 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:39.809 07:31:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:39.809 07:31:05 -- common/autotest_common.sh@10 -- # set +x 00:03:39.809 07:31:05 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:39.809 07:31:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:40.067 INFO: shutting down applications... 00:03:40.067 07:31:05 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:03:40.067 07:31:05 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:40.067 07:31:05 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:40.067 07:31:05 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:40.067 07:31:05 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:40.326 Calling clear_iscsi_subsystem 00:03:40.326 Calling clear_nvmf_subsystem 00:03:40.326 Calling clear_nbd_subsystem 00:03:40.326 Calling clear_ublk_subsystem 00:03:40.326 Calling clear_vhost_blk_subsystem 00:03:40.326 Calling clear_vhost_scsi_subsystem 00:03:40.326 Calling clear_scheduler_subsystem 00:03:40.326 Calling clear_bdev_subsystem 00:03:40.326 Calling clear_accel_subsystem 00:03:40.326 Calling clear_vmd_subsystem 00:03:40.326 Calling clear_sock_subsystem 00:03:40.326 Calling clear_iobuf_subsystem 00:03:40.326 07:31:05 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:40.326 07:31:05 -- json_config/json_config.sh@396 -- # count=100 00:03:40.326 07:31:05 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:40.326 07:31:05 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:40.326 07:31:05 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:40.326 07:31:05 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:40.893 07:31:06 -- json_config/json_config.sh@398 -- # break 00:03:40.893 07:31:06 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:40.893 07:31:06 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:03:40.893 07:31:06 -- json_config/json_config.sh@120 -- # local app=target 00:03:40.893 07:31:06 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:40.893 07:31:06 -- json_config/json_config.sh@124 -- # [[ -n 54049 ]] 00:03:40.893 07:31:06 -- json_config/json_config.sh@127 -- # kill -SIGINT 54049 00:03:40.893 07:31:06 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:03:40.893 07:31:06 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:40.893 07:31:06 -- json_config/json_config.sh@130 -- # kill -0 54049 00:03:40.893 07:31:06 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:41.462 07:31:06 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:41.462 07:31:06 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:41.462 07:31:06 -- json_config/json_config.sh@130 -- # kill -0 54049 00:03:41.462 07:31:06 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:41.462 07:31:06 -- json_config/json_config.sh@132 -- # break 00:03:41.462 07:31:06 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:41.462 SPDK target shutdown done 00:03:41.462 07:31:06 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:41.462 INFO: relaunching applications... 00:03:41.462 07:31:06 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:03:41.462 07:31:06 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:41.462 07:31:06 -- json_config/json_config.sh@98 -- # local app=target 00:03:41.462 07:31:06 -- json_config/json_config.sh@99 -- # shift 00:03:41.462 07:31:06 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:41.462 07:31:06 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:41.462 07:31:06 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:41.462 07:31:06 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:41.462 07:31:06 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:41.462 07:31:06 -- json_config/json_config.sh@111 -- # app_pid[$app]=54238 00:03:41.462 Waiting for target to run... 00:03:41.462 07:31:06 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:41.462 07:31:06 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:41.462 07:31:06 -- json_config/json_config.sh@114 -- # waitforlisten 54238 /var/tmp/spdk_tgt.sock 00:03:41.462 07:31:06 -- common/autotest_common.sh@829 -- # '[' -z 54238 ']' 00:03:41.462 07:31:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:41.462 07:31:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:41.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:41.462 07:31:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:41.462 07:31:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:41.462 07:31:06 -- common/autotest_common.sh@10 -- # set +x 00:03:41.462 [2024-12-02 07:31:06.942103] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
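The shutdown/relaunch above is the save/replay half of the test: the running configuration is captured via save_config into spdk_tgt_config.json (the file the --json relaunch and the later diff both reference), the first target is stopped with SIGINT, and a fresh spdk_tgt is booted with --json pointing at the snapshot so the configuration is replayed at startup. A hand-run sketch of that round trip, reusing the paths from this run ($tgt_pid standing in for the running target's pid):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
$rpc -s /var/tmp/spdk_tgt.sock save_config > "$cfg"          # snapshot the live configuration as JSON
kill -SIGINT "$tgt_pid" && wait "$tgt_pid"                   # graceful shutdown, mirroring the kill -SIGINT above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg" &
tgt_pid=$!                                                    # the relaunched target replays $cfg during startup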
00:03:41.462 [2024-12-02 07:31:06.942216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54238 ] 00:03:41.721 [2024-12-02 07:31:07.239700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.721 [2024-12-02 07:31:07.275844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:41.721 [2024-12-02 07:31:07.276000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.979 [2024-12-02 07:31:07.570943] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:42.238 [2024-12-02 07:31:07.603080] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:42.238 07:31:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:42.238 00:03:42.238 07:31:07 -- common/autotest_common.sh@862 -- # return 0 00:03:42.238 07:31:07 -- json_config/json_config.sh@115 -- # echo '' 00:03:42.238 07:31:07 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:42.238 INFO: Checking if target configuration is the same... 00:03:42.238 07:31:07 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:42.238 07:31:07 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:42.238 07:31:07 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:42.238 07:31:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:42.238 + '[' 2 -ne 2 ']' 00:03:42.238 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:42.238 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:42.496 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:42.496 +++ basename /dev/fd/62 00:03:42.496 ++ mktemp /tmp/62.XXX 00:03:42.496 + tmp_file_1=/tmp/62.JU3 00:03:42.496 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:42.496 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:42.496 + tmp_file_2=/tmp/spdk_tgt_config.json.kH4 00:03:42.496 + ret=0 00:03:42.496 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:42.754 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:42.754 + diff -u /tmp/62.JU3 /tmp/spdk_tgt_config.json.kH4 00:03:42.754 INFO: JSON config files are the same 00:03:42.754 + echo 'INFO: JSON config files are the same' 00:03:42.754 + rm /tmp/62.JU3 /tmp/spdk_tgt_config.json.kH4 00:03:42.754 + exit 0 00:03:42.754 07:31:08 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:42.754 INFO: changing configuration and checking if this can be detected... 00:03:42.754 07:31:08 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
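On the "JSON config files are the same" verdict above: both inputs are first run through the test's config_filter.py -method sort pass (presumably to make ordering differences harmless) before being diffed, so only real configuration drift shows up. A rough equivalent of that comparison, assuming config_filter.py filters stdin to stdout as its argument-less invocations here suggest:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
live=$(mktemp)
saved=$(mktemp)
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"           # live config, normalized
$filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"   # on-disk snapshot, normalized
diff -u "$saved" "$live" && echo 'configs match'                                      # diff exits 0 when nothing changed

The change-detection step that follows simply deletes MallocBdevForConfigChangeCheck over RPC and reruns the same comparison, expecting a non-empty diff this time.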
00:03:42.754 07:31:08 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:42.754 07:31:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:43.012 07:31:08 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:43.012 07:31:08 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:43.012 07:31:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.012 + '[' 2 -ne 2 ']' 00:03:43.012 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:43.012 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:43.012 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:43.012 +++ basename /dev/fd/62 00:03:43.012 ++ mktemp /tmp/62.XXX 00:03:43.012 + tmp_file_1=/tmp/62.scA 00:03:43.012 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:43.012 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:43.012 + tmp_file_2=/tmp/spdk_tgt_config.json.RKs 00:03:43.012 + ret=0 00:03:43.012 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:43.271 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:43.530 + diff -u /tmp/62.scA /tmp/spdk_tgt_config.json.RKs 00:03:43.530 + ret=1 00:03:43.530 + echo '=== Start of file: /tmp/62.scA ===' 00:03:43.530 + cat /tmp/62.scA 00:03:43.530 + echo '=== End of file: /tmp/62.scA ===' 00:03:43.530 + echo '' 00:03:43.530 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RKs ===' 00:03:43.530 + cat /tmp/spdk_tgt_config.json.RKs 00:03:43.530 + echo '=== End of file: /tmp/spdk_tgt_config.json.RKs ===' 00:03:43.530 + echo '' 00:03:43.530 + rm /tmp/62.scA /tmp/spdk_tgt_config.json.RKs 00:03:43.530 + exit 1 00:03:43.530 INFO: configuration change detected. 00:03:43.530 07:31:08 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:03:43.530 07:31:08 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:43.530 07:31:08 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:43.530 07:31:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.530 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:03:43.530 07:31:08 -- json_config/json_config.sh@360 -- # local ret=0 00:03:43.530 07:31:08 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:43.530 07:31:08 -- json_config/json_config.sh@370 -- # [[ -n 54238 ]] 00:03:43.530 07:31:08 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:43.530 07:31:08 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:43.530 07:31:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.530 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:03:43.530 07:31:08 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:03:43.530 07:31:08 -- json_config/json_config.sh@246 -- # uname -s 00:03:43.530 07:31:08 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:03:43.530 07:31:08 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:03:43.530 07:31:08 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:43.530 07:31:08 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:43.530 07:31:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:43.531 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:03:43.531 07:31:08 -- json_config/json_config.sh@376 -- # killprocess 54238 00:03:43.531 07:31:08 -- common/autotest_common.sh@936 -- # '[' -z 54238 ']' 00:03:43.531 07:31:08 -- common/autotest_common.sh@940 -- # kill -0 54238 00:03:43.531 07:31:08 -- common/autotest_common.sh@941 -- # uname 00:03:43.531 07:31:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:43.531 07:31:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54238 00:03:43.531 07:31:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:43.531 07:31:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:43.531 killing process with pid 54238 00:03:43.531 07:31:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54238' 00:03:43.531 07:31:08 -- common/autotest_common.sh@955 -- # kill 54238 00:03:43.531 07:31:08 -- common/autotest_common.sh@960 -- # wait 54238 00:03:43.790 07:31:09 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:43.790 07:31:09 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:43.790 07:31:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:43.790 07:31:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.790 07:31:09 -- json_config/json_config.sh@381 -- # return 0 00:03:43.790 INFO: Success 00:03:43.790 07:31:09 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:43.790 00:03:43.790 real 0m7.858s 00:03:43.790 user 0m11.359s 00:03:43.790 sys 0m1.276s 00:03:43.790 07:31:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.790 ************************************ 00:03:43.790 END TEST json_config 00:03:43.790 ************************************ 00:03:43.790 07:31:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.790 07:31:09 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:43.790 
07:31:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.790 07:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.790 07:31:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.790 ************************************ 00:03:43.790 START TEST json_config_extra_key 00:03:43.790 ************************************ 00:03:43.790 07:31:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:43.790 07:31:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:43.790 07:31:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:43.790 07:31:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:43.790 07:31:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:43.790 07:31:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:43.790 07:31:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:43.790 07:31:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:43.790 07:31:09 -- scripts/common.sh@335 -- # IFS=.-: 00:03:43.790 07:31:09 -- scripts/common.sh@335 -- # read -ra ver1 00:03:43.790 07:31:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.790 07:31:09 -- scripts/common.sh@336 -- # read -ra ver2 00:03:43.790 07:31:09 -- scripts/common.sh@337 -- # local 'op=<' 00:03:43.790 07:31:09 -- scripts/common.sh@339 -- # ver1_l=2 00:03:43.790 07:31:09 -- scripts/common.sh@340 -- # ver2_l=1 00:03:43.790 07:31:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:43.790 07:31:09 -- scripts/common.sh@343 -- # case "$op" in 00:03:43.790 07:31:09 -- scripts/common.sh@344 -- # : 1 00:03:43.790 07:31:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:43.790 07:31:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.790 07:31:09 -- scripts/common.sh@364 -- # decimal 1 00:03:43.790 07:31:09 -- scripts/common.sh@352 -- # local d=1 00:03:43.790 07:31:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.790 07:31:09 -- scripts/common.sh@354 -- # echo 1 00:03:43.790 07:31:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:43.790 07:31:09 -- scripts/common.sh@365 -- # decimal 2 00:03:43.790 07:31:09 -- scripts/common.sh@352 -- # local d=2 00:03:43.790 07:31:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.790 07:31:09 -- scripts/common.sh@354 -- # echo 2 00:03:43.790 07:31:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:43.790 07:31:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:43.790 07:31:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:43.790 07:31:09 -- scripts/common.sh@367 -- # return 0 00:03:43.790 07:31:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.790 07:31:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.790 --rc genhtml_branch_coverage=1 00:03:43.790 --rc genhtml_function_coverage=1 00:03:43.790 --rc genhtml_legend=1 00:03:43.790 --rc geninfo_all_blocks=1 00:03:43.790 --rc geninfo_unexecuted_blocks=1 00:03:43.790 00:03:43.790 ' 00:03:43.790 07:31:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.790 --rc genhtml_branch_coverage=1 00:03:43.790 --rc genhtml_function_coverage=1 00:03:43.790 --rc genhtml_legend=1 00:03:43.790 --rc geninfo_all_blocks=1 00:03:43.790 --rc geninfo_unexecuted_blocks=1 00:03:43.790 00:03:43.790 ' 
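The version-compare records above (repeated before each later test) are autotest_common.sh deciding whether the installed lcov is old enough to need the explicit --rc branch/function coverage flags: scripts/common.sh splits both version strings on the separators and compares them component by component. A compact sketch of that comparison, assuming plain numeric dot-separated components (the real cmp_versions also tolerates '-' and ':' separators):

    # Sketch: "is version A strictly older than version B?", as used for the lcov gate.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                         # equal is not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi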
00:03:43.790 07:31:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.790 --rc genhtml_branch_coverage=1 00:03:43.790 --rc genhtml_function_coverage=1 00:03:43.790 --rc genhtml_legend=1 00:03:43.790 --rc geninfo_all_blocks=1 00:03:43.790 --rc geninfo_unexecuted_blocks=1 00:03:43.790 00:03:43.790 ' 00:03:43.790 07:31:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.790 --rc genhtml_branch_coverage=1 00:03:43.790 --rc genhtml_function_coverage=1 00:03:43.790 --rc genhtml_legend=1 00:03:43.790 --rc geninfo_all_blocks=1 00:03:43.790 --rc geninfo_unexecuted_blocks=1 00:03:43.790 00:03:43.790 ' 00:03:43.790 07:31:09 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.790 07:31:09 -- nvmf/common.sh@7 -- # uname -s 00:03:43.790 07:31:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.790 07:31:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.790 07:31:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.790 07:31:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.790 07:31:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.790 07:31:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.790 07:31:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.790 07:31:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.790 07:31:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.050 07:31:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.050 07:31:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:03:44.050 07:31:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:03:44.050 07:31:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.050 07:31:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.050 07:31:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.050 07:31:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:44.050 07:31:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.050 07:31:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.050 07:31:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.050 07:31:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.050 07:31:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.050 07:31:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.050 07:31:09 -- paths/export.sh@5 -- # export PATH 00:03:44.050 07:31:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.050 07:31:09 -- nvmf/common.sh@46 -- # : 0 00:03:44.050 07:31:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:44.050 07:31:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:44.050 07:31:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:44.050 07:31:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.050 07:31:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.050 07:31:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:44.050 07:31:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:44.050 07:31:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:44.050 INFO: launching applications... 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54386 00:03:44.050 Waiting for target to run... 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54386 /var/tmp/spdk_tgt.sock 00:03:44.050 07:31:09 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:44.050 07:31:09 -- common/autotest_common.sh@829 -- # '[' -z 54386 ']' 00:03:44.050 07:31:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.050 07:31:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:44.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.050 07:31:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.050 07:31:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:44.050 07:31:09 -- common/autotest_common.sh@10 -- # set +x 00:03:44.050 [2024-12-02 07:31:09.480010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:44.050 [2024-12-02 07:31:09.480117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54386 ] 00:03:44.309 [2024-12-02 07:31:09.745632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.309 [2024-12-02 07:31:09.782129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:44.309 [2024-12-02 07:31:09.782295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.876 07:31:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:44.876 00:03:44.876 07:31:10 -- common/autotest_common.sh@862 -- # return 0 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:44.876 INFO: shutting down applications... 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
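The records above show the extra_key flavor of the test: spdk_tgt is launched from a pre-made JSON config (--json extra_key.json) on an explicit RPC socket, and the script blocks in waitforlisten until that socket answers. The real waitforlisten lives in autotest_common.sh; the polling loop below is a simplified stand-in for it (the 100-iteration budget mirrors max_retries=100 in the trace).

    # Sketch: start the target from a JSON config and wait for its RPC socket.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock
    CONF=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONF" &
    tgt_pid=$!

    for (( i = 0; i < 100; i++ )); do
        # rpc_get_methods succeeds only once the app is up and listening.
        if "$RPC" -t 1 -s "$SOCK" rpc_get_methods > /dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done
    echo "target is running as pid $tgt_pid"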
00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54386 ]] 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54386 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54386 00:03:44.876 07:31:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54386 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@52 -- # break 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:03:45.444 SPDK target shutdown done 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:03:45.444 Success 00:03:45.444 07:31:10 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:03:45.445 00:03:45.445 real 0m1.724s 00:03:45.445 user 0m1.635s 00:03:45.445 sys 0m0.289s 00:03:45.445 07:31:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.445 ************************************ 00:03:45.445 END TEST json_config_extra_key 00:03:45.445 ************************************ 00:03:45.445 07:31:10 -- common/autotest_common.sh@10 -- # set +x 00:03:45.445 07:31:11 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:45.445 07:31:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.445 07:31:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.445 07:31:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.445 ************************************ 00:03:45.445 START TEST alias_rpc 00:03:45.445 ************************************ 00:03:45.445 07:31:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:45.704 * Looking for test storage... 
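Before alias_rpc starts, json_config_extra_key shut its target down with the pattern captured above: send SIGINT, then poll with kill -0 (up to 30 half-second waits) until the process is gone before printing 'SPDK target shutdown done'. A minimal sketch of that loop, assuming $tgt_pid holds the pid started earlier:

    # Sketch: shut the target down cleanly and wait for it to exit.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$tgt_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done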
00:03:45.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:03:45.704 07:31:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:45.704 07:31:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:45.704 07:31:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:45.704 07:31:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:45.704 07:31:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:45.704 07:31:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:45.704 07:31:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:45.704 07:31:11 -- scripts/common.sh@335 -- # IFS=.-: 00:03:45.704 07:31:11 -- scripts/common.sh@335 -- # read -ra ver1 00:03:45.704 07:31:11 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.704 07:31:11 -- scripts/common.sh@336 -- # read -ra ver2 00:03:45.704 07:31:11 -- scripts/common.sh@337 -- # local 'op=<' 00:03:45.704 07:31:11 -- scripts/common.sh@339 -- # ver1_l=2 00:03:45.704 07:31:11 -- scripts/common.sh@340 -- # ver2_l=1 00:03:45.704 07:31:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:45.704 07:31:11 -- scripts/common.sh@343 -- # case "$op" in 00:03:45.704 07:31:11 -- scripts/common.sh@344 -- # : 1 00:03:45.704 07:31:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:45.704 07:31:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.704 07:31:11 -- scripts/common.sh@364 -- # decimal 1 00:03:45.704 07:31:11 -- scripts/common.sh@352 -- # local d=1 00:03:45.704 07:31:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.704 07:31:11 -- scripts/common.sh@354 -- # echo 1 00:03:45.704 07:31:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:45.704 07:31:11 -- scripts/common.sh@365 -- # decimal 2 00:03:45.704 07:31:11 -- scripts/common.sh@352 -- # local d=2 00:03:45.704 07:31:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.704 07:31:11 -- scripts/common.sh@354 -- # echo 2 00:03:45.704 07:31:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:45.704 07:31:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:45.704 07:31:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:45.704 07:31:11 -- scripts/common.sh@367 -- # return 0 00:03:45.704 07:31:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.704 07:31:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:45.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.704 --rc genhtml_branch_coverage=1 00:03:45.704 --rc genhtml_function_coverage=1 00:03:45.704 --rc genhtml_legend=1 00:03:45.704 --rc geninfo_all_blocks=1 00:03:45.704 --rc geninfo_unexecuted_blocks=1 00:03:45.704 00:03:45.704 ' 00:03:45.704 07:31:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:45.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.704 --rc genhtml_branch_coverage=1 00:03:45.704 --rc genhtml_function_coverage=1 00:03:45.704 --rc genhtml_legend=1 00:03:45.704 --rc geninfo_all_blocks=1 00:03:45.704 --rc geninfo_unexecuted_blocks=1 00:03:45.704 00:03:45.704 ' 00:03:45.704 07:31:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:45.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.704 --rc genhtml_branch_coverage=1 00:03:45.704 --rc genhtml_function_coverage=1 00:03:45.704 --rc genhtml_legend=1 00:03:45.704 --rc geninfo_all_blocks=1 00:03:45.704 --rc geninfo_unexecuted_blocks=1 00:03:45.704 00:03:45.704 ' 
00:03:45.704 07:31:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:45.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.704 --rc genhtml_branch_coverage=1 00:03:45.704 --rc genhtml_function_coverage=1 00:03:45.704 --rc genhtml_legend=1 00:03:45.704 --rc geninfo_all_blocks=1 00:03:45.704 --rc geninfo_unexecuted_blocks=1 00:03:45.704 00:03:45.704 ' 00:03:45.704 07:31:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:45.704 07:31:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54463 00:03:45.704 07:31:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54463 00:03:45.704 07:31:11 -- common/autotest_common.sh@829 -- # '[' -z 54463 ']' 00:03:45.704 07:31:11 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:45.704 07:31:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.704 07:31:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:45.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.704 07:31:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.704 07:31:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:45.704 07:31:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.704 [2024-12-02 07:31:11.259771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:45.704 [2024-12-02 07:31:11.259874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54463 ] 00:03:45.964 [2024-12-02 07:31:11.389475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.964 [2024-12-02 07:31:11.437009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:45.964 [2024-12-02 07:31:11.437182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.900 07:31:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:46.900 07:31:12 -- common/autotest_common.sh@862 -- # return 0 00:03:46.900 07:31:12 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:03:47.160 07:31:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54463 00:03:47.160 07:31:12 -- common/autotest_common.sh@936 -- # '[' -z 54463 ']' 00:03:47.160 07:31:12 -- common/autotest_common.sh@940 -- # kill -0 54463 00:03:47.160 07:31:12 -- common/autotest_common.sh@941 -- # uname 00:03:47.160 07:31:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:47.160 07:31:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54463 00:03:47.160 07:31:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:47.160 killing process with pid 54463 00:03:47.160 07:31:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:47.160 07:31:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54463' 00:03:47.160 07:31:12 -- common/autotest_common.sh@955 -- # kill 54463 00:03:47.160 07:31:12 -- common/autotest_common.sh@960 -- # wait 54463 00:03:47.420 00:03:47.420 real 0m1.766s 00:03:47.420 user 0m2.163s 00:03:47.420 sys 0m0.316s 00:03:47.420 ************************************ 00:03:47.420 END TEST alias_rpc 00:03:47.420 ************************************ 
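killprocess, used at the end of alias_rpc above (and of json_config before it), does more than kill: it verifies the pid is still alive, checks the process name (reactor_0 here) so it never signals something unexpected such as a sudo wrapper, then kills and waits. A trimmed-down sketch following the checks visible in the trace; the real helper handles the sudo case differently, and wait only works because the target was started by the same shell.

    # Sketch: kill an SPDK app by pid with the same safety checks as the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
            [ "$name" = sudo ] && return 1              # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap our own child
    }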
00:03:47.420 07:31:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.420 07:31:12 -- common/autotest_common.sh@10 -- # set +x 00:03:47.420 07:31:12 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:03:47.420 07:31:12 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:47.420 07:31:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.420 07:31:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.420 07:31:12 -- common/autotest_common.sh@10 -- # set +x 00:03:47.420 ************************************ 00:03:47.420 START TEST spdkcli_tcp 00:03:47.420 ************************************ 00:03:47.420 07:31:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:03:47.420 * Looking for test storage... 00:03:47.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:03:47.420 07:31:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:47.420 07:31:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:47.420 07:31:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:47.420 07:31:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:47.420 07:31:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:47.420 07:31:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:47.420 07:31:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:47.420 07:31:13 -- scripts/common.sh@335 -- # IFS=.-: 00:03:47.420 07:31:13 -- scripts/common.sh@335 -- # read -ra ver1 00:03:47.420 07:31:13 -- scripts/common.sh@336 -- # IFS=.-: 00:03:47.420 07:31:13 -- scripts/common.sh@336 -- # read -ra ver2 00:03:47.420 07:31:13 -- scripts/common.sh@337 -- # local 'op=<' 00:03:47.420 07:31:13 -- scripts/common.sh@339 -- # ver1_l=2 00:03:47.420 07:31:13 -- scripts/common.sh@340 -- # ver2_l=1 00:03:47.420 07:31:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:47.420 07:31:13 -- scripts/common.sh@343 -- # case "$op" in 00:03:47.420 07:31:13 -- scripts/common.sh@344 -- # : 1 00:03:47.420 07:31:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:47.420 07:31:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:47.420 07:31:13 -- scripts/common.sh@364 -- # decimal 1 00:03:47.420 07:31:13 -- scripts/common.sh@352 -- # local d=1 00:03:47.420 07:31:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:47.420 07:31:13 -- scripts/common.sh@354 -- # echo 1 00:03:47.420 07:31:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:47.420 07:31:13 -- scripts/common.sh@365 -- # decimal 2 00:03:47.420 07:31:13 -- scripts/common.sh@352 -- # local d=2 00:03:47.420 07:31:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:47.420 07:31:13 -- scripts/common.sh@354 -- # echo 2 00:03:47.420 07:31:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:47.420 07:31:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:47.420 07:31:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:47.420 07:31:13 -- scripts/common.sh@367 -- # return 0 00:03:47.420 07:31:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:47.420 07:31:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:47.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.420 --rc genhtml_branch_coverage=1 00:03:47.420 --rc genhtml_function_coverage=1 00:03:47.420 --rc genhtml_legend=1 00:03:47.420 --rc geninfo_all_blocks=1 00:03:47.420 --rc geninfo_unexecuted_blocks=1 00:03:47.420 00:03:47.420 ' 00:03:47.420 07:31:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:47.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.420 --rc genhtml_branch_coverage=1 00:03:47.420 --rc genhtml_function_coverage=1 00:03:47.420 --rc genhtml_legend=1 00:03:47.420 --rc geninfo_all_blocks=1 00:03:47.420 --rc geninfo_unexecuted_blocks=1 00:03:47.420 00:03:47.420 ' 00:03:47.420 07:31:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:47.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.420 --rc genhtml_branch_coverage=1 00:03:47.420 --rc genhtml_function_coverage=1 00:03:47.420 --rc genhtml_legend=1 00:03:47.420 --rc geninfo_all_blocks=1 00:03:47.420 --rc geninfo_unexecuted_blocks=1 00:03:47.420 00:03:47.420 ' 00:03:47.420 07:31:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:47.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:47.420 --rc genhtml_branch_coverage=1 00:03:47.420 --rc genhtml_function_coverage=1 00:03:47.420 --rc genhtml_legend=1 00:03:47.420 --rc geninfo_all_blocks=1 00:03:47.420 --rc geninfo_unexecuted_blocks=1 00:03:47.420 00:03:47.420 ' 00:03:47.420 07:31:13 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:03:47.420 07:31:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:03:47.420 07:31:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:03:47.420 07:31:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:47.420 07:31:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:47.420 07:31:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:47.420 07:31:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:47.421 07:31:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.421 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:47.421 07:31:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54540 00:03:47.421 07:31:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 54540 00:03:47.421 07:31:13 -- common/autotest_common.sh@829 -- # '[' -z 54540 ']' 
00:03:47.421 07:31:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.421 07:31:13 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:47.421 07:31:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:47.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.421 07:31:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.421 07:31:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:47.421 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 [2024-12-02 07:31:13.092271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:47.680 [2024-12-02 07:31:13.092381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54540 ] 00:03:47.680 [2024-12-02 07:31:13.226076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:47.680 [2024-12-02 07:31:13.275659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:47.680 [2024-12-02 07:31:13.277352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:47.680 [2024-12-02 07:31:13.277374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.616 07:31:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:48.616 07:31:14 -- common/autotest_common.sh@862 -- # return 0 00:03:48.616 07:31:14 -- spdkcli/tcp.sh@31 -- # socat_pid=54563 00:03:48.616 07:31:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:48.616 07:31:14 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:48.876 [ 00:03:48.876 "bdev_malloc_delete", 00:03:48.876 "bdev_malloc_create", 00:03:48.876 "bdev_null_resize", 00:03:48.876 "bdev_null_delete", 00:03:48.876 "bdev_null_create", 00:03:48.876 "bdev_nvme_cuse_unregister", 00:03:48.876 "bdev_nvme_cuse_register", 00:03:48.876 "bdev_opal_new_user", 00:03:48.876 "bdev_opal_set_lock_state", 00:03:48.876 "bdev_opal_delete", 00:03:48.876 "bdev_opal_get_info", 00:03:48.876 "bdev_opal_create", 00:03:48.876 "bdev_nvme_opal_revert", 00:03:48.876 "bdev_nvme_opal_init", 00:03:48.876 "bdev_nvme_send_cmd", 00:03:48.876 "bdev_nvme_get_path_iostat", 00:03:48.876 "bdev_nvme_get_mdns_discovery_info", 00:03:48.876 "bdev_nvme_stop_mdns_discovery", 00:03:48.876 "bdev_nvme_start_mdns_discovery", 00:03:48.876 "bdev_nvme_set_multipath_policy", 00:03:48.876 "bdev_nvme_set_preferred_path", 00:03:48.876 "bdev_nvme_get_io_paths", 00:03:48.876 "bdev_nvme_remove_error_injection", 00:03:48.876 "bdev_nvme_add_error_injection", 00:03:48.876 "bdev_nvme_get_discovery_info", 00:03:48.876 "bdev_nvme_stop_discovery", 00:03:48.876 "bdev_nvme_start_discovery", 00:03:48.876 "bdev_nvme_get_controller_health_info", 00:03:48.876 "bdev_nvme_disable_controller", 00:03:48.876 "bdev_nvme_enable_controller", 00:03:48.876 "bdev_nvme_reset_controller", 00:03:48.876 "bdev_nvme_get_transport_statistics", 00:03:48.876 "bdev_nvme_apply_firmware", 00:03:48.876 "bdev_nvme_detach_controller", 00:03:48.876 "bdev_nvme_get_controllers", 00:03:48.876 "bdev_nvme_attach_controller", 00:03:48.876 "bdev_nvme_set_hotplug", 00:03:48.876 
"bdev_nvme_set_options", 00:03:48.876 "bdev_passthru_delete", 00:03:48.876 "bdev_passthru_create", 00:03:48.876 "bdev_lvol_grow_lvstore", 00:03:48.876 "bdev_lvol_get_lvols", 00:03:48.876 "bdev_lvol_get_lvstores", 00:03:48.876 "bdev_lvol_delete", 00:03:48.876 "bdev_lvol_set_read_only", 00:03:48.876 "bdev_lvol_resize", 00:03:48.876 "bdev_lvol_decouple_parent", 00:03:48.876 "bdev_lvol_inflate", 00:03:48.876 "bdev_lvol_rename", 00:03:48.876 "bdev_lvol_clone_bdev", 00:03:48.876 "bdev_lvol_clone", 00:03:48.876 "bdev_lvol_snapshot", 00:03:48.876 "bdev_lvol_create", 00:03:48.876 "bdev_lvol_delete_lvstore", 00:03:48.876 "bdev_lvol_rename_lvstore", 00:03:48.876 "bdev_lvol_create_lvstore", 00:03:48.876 "bdev_raid_set_options", 00:03:48.876 "bdev_raid_remove_base_bdev", 00:03:48.876 "bdev_raid_add_base_bdev", 00:03:48.876 "bdev_raid_delete", 00:03:48.876 "bdev_raid_create", 00:03:48.876 "bdev_raid_get_bdevs", 00:03:48.876 "bdev_error_inject_error", 00:03:48.876 "bdev_error_delete", 00:03:48.876 "bdev_error_create", 00:03:48.876 "bdev_split_delete", 00:03:48.876 "bdev_split_create", 00:03:48.876 "bdev_delay_delete", 00:03:48.876 "bdev_delay_create", 00:03:48.876 "bdev_delay_update_latency", 00:03:48.876 "bdev_zone_block_delete", 00:03:48.876 "bdev_zone_block_create", 00:03:48.876 "blobfs_create", 00:03:48.876 "blobfs_detect", 00:03:48.876 "blobfs_set_cache_size", 00:03:48.876 "bdev_aio_delete", 00:03:48.876 "bdev_aio_rescan", 00:03:48.876 "bdev_aio_create", 00:03:48.876 "bdev_ftl_set_property", 00:03:48.876 "bdev_ftl_get_properties", 00:03:48.876 "bdev_ftl_get_stats", 00:03:48.876 "bdev_ftl_unmap", 00:03:48.876 "bdev_ftl_unload", 00:03:48.876 "bdev_ftl_delete", 00:03:48.876 "bdev_ftl_load", 00:03:48.876 "bdev_ftl_create", 00:03:48.876 "bdev_virtio_attach_controller", 00:03:48.876 "bdev_virtio_scsi_get_devices", 00:03:48.876 "bdev_virtio_detach_controller", 00:03:48.876 "bdev_virtio_blk_set_hotplug", 00:03:48.876 "bdev_iscsi_delete", 00:03:48.876 "bdev_iscsi_create", 00:03:48.876 "bdev_iscsi_set_options", 00:03:48.876 "bdev_uring_delete", 00:03:48.876 "bdev_uring_create", 00:03:48.876 "accel_error_inject_error", 00:03:48.876 "ioat_scan_accel_module", 00:03:48.876 "dsa_scan_accel_module", 00:03:48.876 "iaa_scan_accel_module", 00:03:48.876 "vfu_virtio_create_scsi_endpoint", 00:03:48.876 "vfu_virtio_scsi_remove_target", 00:03:48.876 "vfu_virtio_scsi_add_target", 00:03:48.876 "vfu_virtio_create_blk_endpoint", 00:03:48.876 "vfu_virtio_delete_endpoint", 00:03:48.876 "iscsi_set_options", 00:03:48.876 "iscsi_get_auth_groups", 00:03:48.876 "iscsi_auth_group_remove_secret", 00:03:48.876 "iscsi_auth_group_add_secret", 00:03:48.876 "iscsi_delete_auth_group", 00:03:48.876 "iscsi_create_auth_group", 00:03:48.876 "iscsi_set_discovery_auth", 00:03:48.876 "iscsi_get_options", 00:03:48.876 "iscsi_target_node_request_logout", 00:03:48.876 "iscsi_target_node_set_redirect", 00:03:48.876 "iscsi_target_node_set_auth", 00:03:48.876 "iscsi_target_node_add_lun", 00:03:48.876 "iscsi_get_connections", 00:03:48.876 "iscsi_portal_group_set_auth", 00:03:48.876 "iscsi_start_portal_group", 00:03:48.876 "iscsi_delete_portal_group", 00:03:48.876 "iscsi_create_portal_group", 00:03:48.876 "iscsi_get_portal_groups", 00:03:48.876 "iscsi_delete_target_node", 00:03:48.876 "iscsi_target_node_remove_pg_ig_maps", 00:03:48.876 "iscsi_target_node_add_pg_ig_maps", 00:03:48.876 "iscsi_create_target_node", 00:03:48.876 "iscsi_get_target_nodes", 00:03:48.876 "iscsi_delete_initiator_group", 00:03:48.876 "iscsi_initiator_group_remove_initiators", 
00:03:48.876 "iscsi_initiator_group_add_initiators", 00:03:48.876 "iscsi_create_initiator_group", 00:03:48.876 "iscsi_get_initiator_groups", 00:03:48.876 "nvmf_set_crdt", 00:03:48.876 "nvmf_set_config", 00:03:48.876 "nvmf_set_max_subsystems", 00:03:48.876 "nvmf_subsystem_get_listeners", 00:03:48.876 "nvmf_subsystem_get_qpairs", 00:03:48.876 "nvmf_subsystem_get_controllers", 00:03:48.876 "nvmf_get_stats", 00:03:48.876 "nvmf_get_transports", 00:03:48.876 "nvmf_create_transport", 00:03:48.876 "nvmf_get_targets", 00:03:48.876 "nvmf_delete_target", 00:03:48.876 "nvmf_create_target", 00:03:48.876 "nvmf_subsystem_allow_any_host", 00:03:48.876 "nvmf_subsystem_remove_host", 00:03:48.876 "nvmf_subsystem_add_host", 00:03:48.876 "nvmf_subsystem_remove_ns", 00:03:48.876 "nvmf_subsystem_add_ns", 00:03:48.876 "nvmf_subsystem_listener_set_ana_state", 00:03:48.876 "nvmf_discovery_get_referrals", 00:03:48.876 "nvmf_discovery_remove_referral", 00:03:48.876 "nvmf_discovery_add_referral", 00:03:48.876 "nvmf_subsystem_remove_listener", 00:03:48.876 "nvmf_subsystem_add_listener", 00:03:48.876 "nvmf_delete_subsystem", 00:03:48.876 "nvmf_create_subsystem", 00:03:48.876 "nvmf_get_subsystems", 00:03:48.876 "env_dpdk_get_mem_stats", 00:03:48.876 "nbd_get_disks", 00:03:48.876 "nbd_stop_disk", 00:03:48.877 "nbd_start_disk", 00:03:48.877 "ublk_recover_disk", 00:03:48.877 "ublk_get_disks", 00:03:48.877 "ublk_stop_disk", 00:03:48.877 "ublk_start_disk", 00:03:48.877 "ublk_destroy_target", 00:03:48.877 "ublk_create_target", 00:03:48.877 "virtio_blk_create_transport", 00:03:48.877 "virtio_blk_get_transports", 00:03:48.877 "vhost_controller_set_coalescing", 00:03:48.877 "vhost_get_controllers", 00:03:48.877 "vhost_delete_controller", 00:03:48.877 "vhost_create_blk_controller", 00:03:48.877 "vhost_scsi_controller_remove_target", 00:03:48.877 "vhost_scsi_controller_add_target", 00:03:48.877 "vhost_start_scsi_controller", 00:03:48.877 "vhost_create_scsi_controller", 00:03:48.877 "thread_set_cpumask", 00:03:48.877 "framework_get_scheduler", 00:03:48.877 "framework_set_scheduler", 00:03:48.877 "framework_get_reactors", 00:03:48.877 "thread_get_io_channels", 00:03:48.877 "thread_get_pollers", 00:03:48.877 "thread_get_stats", 00:03:48.877 "framework_monitor_context_switch", 00:03:48.877 "spdk_kill_instance", 00:03:48.877 "log_enable_timestamps", 00:03:48.877 "log_get_flags", 00:03:48.877 "log_clear_flag", 00:03:48.877 "log_set_flag", 00:03:48.877 "log_get_level", 00:03:48.877 "log_set_level", 00:03:48.877 "log_get_print_level", 00:03:48.877 "log_set_print_level", 00:03:48.877 "framework_enable_cpumask_locks", 00:03:48.877 "framework_disable_cpumask_locks", 00:03:48.877 "framework_wait_init", 00:03:48.877 "framework_start_init", 00:03:48.877 "scsi_get_devices", 00:03:48.877 "bdev_get_histogram", 00:03:48.877 "bdev_enable_histogram", 00:03:48.877 "bdev_set_qos_limit", 00:03:48.877 "bdev_set_qd_sampling_period", 00:03:48.877 "bdev_get_bdevs", 00:03:48.877 "bdev_reset_iostat", 00:03:48.877 "bdev_get_iostat", 00:03:48.877 "bdev_examine", 00:03:48.877 "bdev_wait_for_examine", 00:03:48.877 "bdev_set_options", 00:03:48.877 "notify_get_notifications", 00:03:48.877 "notify_get_types", 00:03:48.877 "accel_get_stats", 00:03:48.877 "accel_set_options", 00:03:48.877 "accel_set_driver", 00:03:48.877 "accel_crypto_key_destroy", 00:03:48.877 "accel_crypto_keys_get", 00:03:48.877 "accel_crypto_key_create", 00:03:48.877 "accel_assign_opc", 00:03:48.877 "accel_get_module_info", 00:03:48.877 "accel_get_opc_assignments", 00:03:48.877 "vmd_rescan", 
00:03:48.877 "vmd_remove_device", 00:03:48.877 "vmd_enable", 00:03:48.877 "sock_set_default_impl", 00:03:48.877 "sock_impl_set_options", 00:03:48.877 "sock_impl_get_options", 00:03:48.877 "iobuf_get_stats", 00:03:48.877 "iobuf_set_options", 00:03:48.877 "framework_get_pci_devices", 00:03:48.877 "framework_get_config", 00:03:48.877 "framework_get_subsystems", 00:03:48.877 "vfu_tgt_set_base_path", 00:03:48.877 "trace_get_info", 00:03:48.877 "trace_get_tpoint_group_mask", 00:03:48.877 "trace_disable_tpoint_group", 00:03:48.877 "trace_enable_tpoint_group", 00:03:48.877 "trace_clear_tpoint_mask", 00:03:48.877 "trace_set_tpoint_mask", 00:03:48.877 "spdk_get_version", 00:03:48.877 "rpc_get_methods" 00:03:48.877 ] 00:03:48.877 07:31:14 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:48.877 07:31:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:48.877 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:03:48.877 07:31:14 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:48.877 07:31:14 -- spdkcli/tcp.sh@38 -- # killprocess 54540 00:03:48.877 07:31:14 -- common/autotest_common.sh@936 -- # '[' -z 54540 ']' 00:03:48.877 07:31:14 -- common/autotest_common.sh@940 -- # kill -0 54540 00:03:48.877 07:31:14 -- common/autotest_common.sh@941 -- # uname 00:03:48.877 07:31:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:48.877 07:31:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54540 00:03:48.877 07:31:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:48.877 killing process with pid 54540 00:03:48.877 07:31:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:48.877 07:31:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54540' 00:03:48.877 07:31:14 -- common/autotest_common.sh@955 -- # kill 54540 00:03:48.877 07:31:14 -- common/autotest_common.sh@960 -- # wait 54540 00:03:49.136 00:03:49.136 real 0m1.839s 00:03:49.136 user 0m3.597s 00:03:49.136 sys 0m0.361s 00:03:49.136 ************************************ 00:03:49.136 END TEST spdkcli_tcp 00:03:49.136 ************************************ 00:03:49.136 07:31:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:49.136 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.136 07:31:14 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:49.136 07:31:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.136 07:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.136 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.136 ************************************ 00:03:49.136 START TEST dpdk_mem_utility 00:03:49.136 ************************************ 00:03:49.136 07:31:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:49.395 * Looking for test storage... 
00:03:49.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:03:49.395 07:31:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:49.395 07:31:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:49.395 07:31:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:49.395 07:31:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:49.395 07:31:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:49.395 07:31:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:49.395 07:31:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:49.395 07:31:14 -- scripts/common.sh@335 -- # IFS=.-: 00:03:49.395 07:31:14 -- scripts/common.sh@335 -- # read -ra ver1 00:03:49.395 07:31:14 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.395 07:31:14 -- scripts/common.sh@336 -- # read -ra ver2 00:03:49.395 07:31:14 -- scripts/common.sh@337 -- # local 'op=<' 00:03:49.395 07:31:14 -- scripts/common.sh@339 -- # ver1_l=2 00:03:49.395 07:31:14 -- scripts/common.sh@340 -- # ver2_l=1 00:03:49.395 07:31:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:49.395 07:31:14 -- scripts/common.sh@343 -- # case "$op" in 00:03:49.395 07:31:14 -- scripts/common.sh@344 -- # : 1 00:03:49.395 07:31:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:49.395 07:31:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.395 07:31:14 -- scripts/common.sh@364 -- # decimal 1 00:03:49.395 07:31:14 -- scripts/common.sh@352 -- # local d=1 00:03:49.395 07:31:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.395 07:31:14 -- scripts/common.sh@354 -- # echo 1 00:03:49.395 07:31:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:49.395 07:31:14 -- scripts/common.sh@365 -- # decimal 2 00:03:49.395 07:31:14 -- scripts/common.sh@352 -- # local d=2 00:03:49.395 07:31:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.395 07:31:14 -- scripts/common.sh@354 -- # echo 2 00:03:49.395 07:31:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:49.395 07:31:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:49.395 07:31:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:49.395 07:31:14 -- scripts/common.sh@367 -- # return 0 00:03:49.395 07:31:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.395 07:31:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.395 --rc genhtml_branch_coverage=1 00:03:49.395 --rc genhtml_function_coverage=1 00:03:49.395 --rc genhtml_legend=1 00:03:49.395 --rc geninfo_all_blocks=1 00:03:49.395 --rc geninfo_unexecuted_blocks=1 00:03:49.395 00:03:49.395 ' 00:03:49.395 07:31:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.395 --rc genhtml_branch_coverage=1 00:03:49.395 --rc genhtml_function_coverage=1 00:03:49.395 --rc genhtml_legend=1 00:03:49.395 --rc geninfo_all_blocks=1 00:03:49.395 --rc geninfo_unexecuted_blocks=1 00:03:49.395 00:03:49.395 ' 00:03:49.395 07:31:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.395 --rc genhtml_branch_coverage=1 00:03:49.395 --rc genhtml_function_coverage=1 00:03:49.395 --rc genhtml_legend=1 00:03:49.395 --rc geninfo_all_blocks=1 00:03:49.395 --rc geninfo_unexecuted_blocks=1 00:03:49.395 00:03:49.395 ' 
00:03:49.395 07:31:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:49.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.396 --rc genhtml_branch_coverage=1 00:03:49.396 --rc genhtml_function_coverage=1 00:03:49.396 --rc genhtml_legend=1 00:03:49.396 --rc geninfo_all_blocks=1 00:03:49.396 --rc geninfo_unexecuted_blocks=1 00:03:49.396 00:03:49.396 ' 00:03:49.396 07:31:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:03:49.396 07:31:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54633 00:03:49.396 07:31:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54633 00:03:49.396 07:31:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:49.396 07:31:14 -- common/autotest_common.sh@829 -- # '[' -z 54633 ']' 00:03:49.396 07:31:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.396 07:31:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:49.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.396 07:31:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.396 07:31:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:49.396 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:03:49.396 [2024-12-02 07:31:14.961608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:49.396 [2024-12-02 07:31:14.961704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54633 ] 00:03:49.654 [2024-12-02 07:31:15.089402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.654 [2024-12-02 07:31:15.141613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:49.654 [2024-12-02 07:31:15.141751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.591 07:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:50.591 07:31:15 -- common/autotest_common.sh@862 -- # return 0 00:03:50.591 07:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:50.591 07:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:50.591 07:31:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.591 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:03:50.591 { 00:03:50.591 "filename": "/tmp/spdk_mem_dump.txt" 00:03:50.591 } 00:03:50.591 07:31:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.591 07:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:03:50.591 DPDK memory size 814.000000 MiB in 1 heap(s) 00:03:50.591 1 heaps totaling size 814.000000 MiB 00:03:50.591 size: 814.000000 MiB heap id: 0 00:03:50.591 end heaps---------- 00:03:50.591 8 mempools totaling size 598.116089 MiB 00:03:50.591 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:50.591 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:50.591 size: 84.521057 MiB name: bdev_io_54633 00:03:50.591 size: 51.011292 MiB name: evtpool_54633 00:03:50.591 size: 50.003479 MiB name: msgpool_54633 
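The dpdk_mem_utility target launched above is queried in the records that follow: env_dpdk_get_mem_stats asks the app to write a raw dump (the RPC returns the path /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py turns that dump into the heap/mempool/memzone summary, with -m 0 adding the per-element breakdown of heap 0. A short sketch of that sequence against a target already running on the default socket:

    # Sketch: dump and summarize the target's DPDK memory usage.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
    "$SPDK"/scripts/dpdk_mem_info.py                 # heaps / mempools / memzones
    "$SPDK"/scripts/dpdk_mem_info.py -m 0            # free + malloc elements of heap 0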
00:03:50.591 size: 21.763794 MiB name: PDU_Pool 00:03:50.591 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:50.591 size: 0.026123 MiB name: Session_Pool 00:03:50.591 end mempools------- 00:03:50.591 6 memzones totaling size 4.142822 MiB 00:03:50.591 size: 1.000366 MiB name: RG_ring_0_54633 00:03:50.591 size: 1.000366 MiB name: RG_ring_1_54633 00:03:50.591 size: 1.000366 MiB name: RG_ring_4_54633 00:03:50.591 size: 1.000366 MiB name: RG_ring_5_54633 00:03:50.591 size: 0.125366 MiB name: RG_ring_2_54633 00:03:50.591 size: 0.015991 MiB name: RG_ring_3_54633 00:03:50.591 end memzones------- 00:03:50.591 07:31:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:03:50.591 heap id: 0 total size: 814.000000 MiB number of busy elements: 311 number of free elements: 15 00:03:50.591 list of free elements. size: 12.469910 MiB 00:03:50.591 element at address: 0x200000400000 with size: 1.999512 MiB 00:03:50.591 element at address: 0x200018e00000 with size: 0.999878 MiB 00:03:50.591 element at address: 0x200019000000 with size: 0.999878 MiB 00:03:50.591 element at address: 0x200003e00000 with size: 0.996277 MiB 00:03:50.591 element at address: 0x200031c00000 with size: 0.994446 MiB 00:03:50.592 element at address: 0x200013800000 with size: 0.978699 MiB 00:03:50.592 element at address: 0x200007000000 with size: 0.959839 MiB 00:03:50.592 element at address: 0x200019200000 with size: 0.936584 MiB 00:03:50.592 element at address: 0x200000200000 with size: 0.832825 MiB 00:03:50.592 element at address: 0x20001aa00000 with size: 0.567688 MiB 00:03:50.592 element at address: 0x20000b200000 with size: 0.488892 MiB 00:03:50.592 element at address: 0x200000800000 with size: 0.486145 MiB 00:03:50.592 element at address: 0x200019400000 with size: 0.485657 MiB 00:03:50.592 element at address: 0x200027e00000 with size: 0.395752 MiB 00:03:50.592 element at address: 0x200003a00000 with size: 0.347839 MiB 00:03:50.592 list of standard malloc elements. 
size: 199.267517 MiB 00:03:50.592 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:03:50.592 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:03:50.592 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:50.592 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:03:50.592 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:50.592 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:50.592 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:03:50.592 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:50.592 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:03:50.592 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:03:50.592 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087c740 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087c800 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087c980 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59180 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59240 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59300 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59480 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59540 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59600 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59780 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59840 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59900 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:03:50.592 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:03:50.592 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003adb300 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003adb500 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003affa80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003affb40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91840 
with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93d00 with size: 0.000183 MiB 
00:03:50.593 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:03:50.593 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:03:50.594 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:03:50.594 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e65500 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:03:50.594 element at 
address: 0x200027e6cf00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f3c0 
with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:03:50.594 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:03:50.594 list of memzone associated elements. size: 602.262573 MiB 00:03:50.594 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:03:50.594 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:50.594 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:03:50.594 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:50.594 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:03:50.594 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54633_0 00:03:50.594 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:03:50.594 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54633_0 00:03:50.594 element at address: 0x200003fff380 with size: 48.003052 MiB 00:03:50.594 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54633_0 00:03:50.594 element at address: 0x2000195be940 with size: 20.255554 MiB 00:03:50.594 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:50.594 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:03:50.594 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:50.594 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:03:50.594 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54633 00:03:50.594 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:03:50.594 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54633 00:03:50.594 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:50.594 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54633 00:03:50.594 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:03:50.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:50.594 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:03:50.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:50.594 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:03:50.594 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:50.595 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:03:50.595 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:50.595 element at address: 0x200003eff180 with size: 1.000488 MiB 00:03:50.595 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54633 00:03:50.595 element at address: 
0x200003affc00 with size: 1.000488 MiB 00:03:50.595 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54633 00:03:50.595 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:03:50.595 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54633 00:03:50.595 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:03:50.595 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54633 00:03:50.595 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:03:50.595 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54633 00:03:50.595 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:03:50.595 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:50.595 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:03:50.595 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:50.595 element at address: 0x20001947c540 with size: 0.250488 MiB 00:03:50.595 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:50.595 element at address: 0x200003adf880 with size: 0.125488 MiB 00:03:50.595 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54633 00:03:50.595 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:03:50.595 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:50.595 element at address: 0x200027e65680 with size: 0.023743 MiB 00:03:50.595 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:50.595 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:03:50.595 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54633 00:03:50.595 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:03:50.595 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:50.595 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:03:50.595 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54633 00:03:50.595 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:03:50.595 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54633 00:03:50.595 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:03:50.595 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:50.595 07:31:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:50.595 07:31:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54633 00:03:50.595 07:31:16 -- common/autotest_common.sh@936 -- # '[' -z 54633 ']' 00:03:50.595 07:31:16 -- common/autotest_common.sh@940 -- # kill -0 54633 00:03:50.595 07:31:16 -- common/autotest_common.sh@941 -- # uname 00:03:50.595 07:31:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:50.595 07:31:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54633 00:03:50.595 07:31:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:50.595 07:31:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:50.595 killing process with pid 54633 00:03:50.595 07:31:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54633' 00:03:50.595 07:31:16 -- common/autotest_common.sh@955 -- # kill 54633 00:03:50.595 07:31:16 -- common/autotest_common.sh@960 -- # wait 54633 00:03:50.855 00:03:50.855 real 0m1.604s 00:03:50.855 user 0m1.863s 00:03:50.855 sys 0m0.318s 00:03:50.855 07:31:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:50.855 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:03:50.855 
************************************ 00:03:50.855 END TEST dpdk_mem_utility 00:03:50.855 ************************************ 00:03:50.855 07:31:16 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:03:50.855 07:31:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.855 07:31:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.855 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:03:50.855 ************************************ 00:03:50.855 START TEST event 00:03:50.855 ************************************ 00:03:50.855 07:31:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:03:50.855 * Looking for test storage... 00:03:51.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:03:51.115 07:31:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:51.115 07:31:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:51.115 07:31:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:51.115 07:31:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:51.115 07:31:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:51.115 07:31:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:51.115 07:31:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:51.115 07:31:16 -- scripts/common.sh@335 -- # IFS=.-: 00:03:51.115 07:31:16 -- scripts/common.sh@335 -- # read -ra ver1 00:03:51.115 07:31:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.115 07:31:16 -- scripts/common.sh@336 -- # read -ra ver2 00:03:51.115 07:31:16 -- scripts/common.sh@337 -- # local 'op=<' 00:03:51.115 07:31:16 -- scripts/common.sh@339 -- # ver1_l=2 00:03:51.115 07:31:16 -- scripts/common.sh@340 -- # ver2_l=1 00:03:51.115 07:31:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:51.115 07:31:16 -- scripts/common.sh@343 -- # case "$op" in 00:03:51.115 07:31:16 -- scripts/common.sh@344 -- # : 1 00:03:51.115 07:31:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:51.115 07:31:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.115 07:31:16 -- scripts/common.sh@364 -- # decimal 1 00:03:51.115 07:31:16 -- scripts/common.sh@352 -- # local d=1 00:03:51.115 07:31:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.115 07:31:16 -- scripts/common.sh@354 -- # echo 1 00:03:51.115 07:31:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:51.115 07:31:16 -- scripts/common.sh@365 -- # decimal 2 00:03:51.115 07:31:16 -- scripts/common.sh@352 -- # local d=2 00:03:51.115 07:31:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.115 07:31:16 -- scripts/common.sh@354 -- # echo 2 00:03:51.115 07:31:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:51.115 07:31:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:51.115 07:31:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:51.115 07:31:16 -- scripts/common.sh@367 -- # return 0 00:03:51.115 07:31:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.115 07:31:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:51.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.115 --rc genhtml_branch_coverage=1 00:03:51.115 --rc genhtml_function_coverage=1 00:03:51.115 --rc genhtml_legend=1 00:03:51.115 --rc geninfo_all_blocks=1 00:03:51.115 --rc geninfo_unexecuted_blocks=1 00:03:51.115 00:03:51.115 ' 00:03:51.115 07:31:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:51.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.115 --rc genhtml_branch_coverage=1 00:03:51.115 --rc genhtml_function_coverage=1 00:03:51.115 --rc genhtml_legend=1 00:03:51.115 --rc geninfo_all_blocks=1 00:03:51.115 --rc geninfo_unexecuted_blocks=1 00:03:51.115 00:03:51.115 ' 00:03:51.115 07:31:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:51.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.115 --rc genhtml_branch_coverage=1 00:03:51.115 --rc genhtml_function_coverage=1 00:03:51.115 --rc genhtml_legend=1 00:03:51.115 --rc geninfo_all_blocks=1 00:03:51.115 --rc geninfo_unexecuted_blocks=1 00:03:51.115 00:03:51.115 ' 00:03:51.115 07:31:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:51.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.115 --rc genhtml_branch_coverage=1 00:03:51.115 --rc genhtml_function_coverage=1 00:03:51.115 --rc genhtml_legend=1 00:03:51.115 --rc geninfo_all_blocks=1 00:03:51.115 --rc geninfo_unexecuted_blocks=1 00:03:51.115 00:03:51.115 ' 00:03:51.115 07:31:16 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:03:51.115 07:31:16 -- bdev/nbd_common.sh@6 -- # set -e 00:03:51.115 07:31:16 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:51.115 07:31:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:03:51.115 07:31:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.115 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:03:51.115 ************************************ 00:03:51.115 START TEST event_perf 00:03:51.115 ************************************ 00:03:51.115 07:31:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:51.115 Running I/O for 1 seconds...[2024-12-02 07:31:16.585117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:03:51.115 [2024-12-02 07:31:16.585192] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54717 ] 00:03:51.115 [2024-12-02 07:31:16.715271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:51.374 [2024-12-02 07:31:16.765332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:51.374 [2024-12-02 07:31:16.765446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:03:51.374 [2024-12-02 07:31:16.765576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.374 Running I/O for 1 seconds...[2024-12-02 07:31:16.765576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:03:52.317 00:03:52.317 lcore 0: 208248 00:03:52.317 lcore 1: 208249 00:03:52.317 lcore 2: 208249 00:03:52.317 lcore 3: 208248 00:03:52.317 done. 00:03:52.317 00:03:52.317 real 0m1.272s 00:03:52.317 user 0m4.109s 00:03:52.317 sys 0m0.045s 00:03:52.317 07:31:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.317 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:03:52.317 ************************************ 00:03:52.317 END TEST event_perf 00:03:52.317 ************************************ 00:03:52.317 07:31:17 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:03:52.317 07:31:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:03:52.317 07:31:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.317 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:03:52.317 ************************************ 00:03:52.317 START TEST event_reactor 00:03:52.317 ************************************ 00:03:52.317 07:31:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:03:52.318 [2024-12-02 07:31:17.908607] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:03:52.318 [2024-12-02 07:31:17.908705] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54750 ] 00:03:52.580 [2024-12-02 07:31:18.037853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.580 [2024-12-02 07:31:18.085783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.951 test_start 00:03:53.951 oneshot 00:03:53.951 tick 100 00:03:53.951 tick 100 00:03:53.951 tick 250 00:03:53.951 tick 100 00:03:53.951 tick 100 00:03:53.951 tick 250 00:03:53.951 tick 500 00:03:53.951 tick 100 00:03:53.951 tick 100 00:03:53.951 tick 100 00:03:53.951 tick 250 00:03:53.951 tick 100 00:03:53.951 tick 100 00:03:53.951 test_end 00:03:53.951 00:03:53.951 real 0m1.270s 00:03:53.951 user 0m1.129s 00:03:53.951 sys 0m0.036s 00:03:53.951 07:31:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:53.951 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:03:53.951 ************************************ 00:03:53.951 END TEST event_reactor 00:03:53.951 ************************************ 00:03:53.951 07:31:19 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:53.951 07:31:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:03:53.951 07:31:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.951 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:03:53.951 ************************************ 00:03:53.951 START TEST event_reactor_perf 00:03:53.951 ************************************ 00:03:53.951 07:31:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:53.951 [2024-12-02 07:31:19.227023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:03:53.951 [2024-12-02 07:31:19.227109] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54790 ] 00:03:53.951 [2024-12-02 07:31:19.360595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.951 [2024-12-02 07:31:19.409651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.905 test_start 00:03:54.905 test_end 00:03:54.905 Performance: 465564 events per second 00:03:54.905 00:03:54.905 real 0m1.276s 00:03:54.905 user 0m1.130s 00:03:54.905 sys 0m0.041s 00:03:54.905 07:31:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:54.905 07:31:20 -- common/autotest_common.sh@10 -- # set +x 00:03:54.905 ************************************ 00:03:54.905 END TEST event_reactor_perf 00:03:54.905 ************************************ 00:03:55.162 07:31:20 -- event/event.sh@49 -- # uname -s 00:03:55.162 07:31:20 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:55.162 07:31:20 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:03:55.162 07:31:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.162 07:31:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.162 07:31:20 -- common/autotest_common.sh@10 -- # set +x 00:03:55.162 ************************************ 00:03:55.162 START TEST event_scheduler 00:03:55.162 ************************************ 00:03:55.162 07:31:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:03:55.162 * Looking for test storage... 00:03:55.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:03:55.162 07:31:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:55.162 07:31:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:55.162 07:31:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:55.162 07:31:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:55.162 07:31:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:55.162 07:31:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:55.162 07:31:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:55.162 07:31:20 -- scripts/common.sh@335 -- # IFS=.-: 00:03:55.163 07:31:20 -- scripts/common.sh@335 -- # read -ra ver1 00:03:55.163 07:31:20 -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.163 07:31:20 -- scripts/common.sh@336 -- # read -ra ver2 00:03:55.163 07:31:20 -- scripts/common.sh@337 -- # local 'op=<' 00:03:55.163 07:31:20 -- scripts/common.sh@339 -- # ver1_l=2 00:03:55.163 07:31:20 -- scripts/common.sh@340 -- # ver2_l=1 00:03:55.163 07:31:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:55.163 07:31:20 -- scripts/common.sh@343 -- # case "$op" in 00:03:55.163 07:31:20 -- scripts/common.sh@344 -- # : 1 00:03:55.163 07:31:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:55.163 07:31:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.163 07:31:20 -- scripts/common.sh@364 -- # decimal 1 00:03:55.163 07:31:20 -- scripts/common.sh@352 -- # local d=1 00:03:55.163 07:31:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.163 07:31:20 -- scripts/common.sh@354 -- # echo 1 00:03:55.163 07:31:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:55.163 07:31:20 -- scripts/common.sh@365 -- # decimal 2 00:03:55.163 07:31:20 -- scripts/common.sh@352 -- # local d=2 00:03:55.163 07:31:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.163 07:31:20 -- scripts/common.sh@354 -- # echo 2 00:03:55.163 07:31:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:55.163 07:31:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:55.163 07:31:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:55.163 07:31:20 -- scripts/common.sh@367 -- # return 0 00:03:55.163 07:31:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.163 07:31:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.163 --rc genhtml_branch_coverage=1 00:03:55.163 --rc genhtml_function_coverage=1 00:03:55.163 --rc genhtml_legend=1 00:03:55.163 --rc geninfo_all_blocks=1 00:03:55.163 --rc geninfo_unexecuted_blocks=1 00:03:55.163 00:03:55.163 ' 00:03:55.163 07:31:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.163 --rc genhtml_branch_coverage=1 00:03:55.163 --rc genhtml_function_coverage=1 00:03:55.163 --rc genhtml_legend=1 00:03:55.163 --rc geninfo_all_blocks=1 00:03:55.163 --rc geninfo_unexecuted_blocks=1 00:03:55.163 00:03:55.163 ' 00:03:55.163 07:31:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.163 --rc genhtml_branch_coverage=1 00:03:55.163 --rc genhtml_function_coverage=1 00:03:55.163 --rc genhtml_legend=1 00:03:55.163 --rc geninfo_all_blocks=1 00:03:55.163 --rc geninfo_unexecuted_blocks=1 00:03:55.163 00:03:55.163 ' 00:03:55.163 07:31:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.163 --rc genhtml_branch_coverage=1 00:03:55.163 --rc genhtml_function_coverage=1 00:03:55.163 --rc genhtml_legend=1 00:03:55.163 --rc geninfo_all_blocks=1 00:03:55.163 --rc geninfo_unexecuted_blocks=1 00:03:55.163 00:03:55.163 ' 00:03:55.163 07:31:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:55.163 07:31:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54854 00:03:55.163 07:31:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.163 07:31:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 54854 00:03:55.163 07:31:20 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:55.163 07:31:20 -- common/autotest_common.sh@829 -- # '[' -z 54854 ']' 00:03:55.163 07:31:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.163 07:31:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.163 07:31:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:55.163 07:31:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.163 07:31:20 -- common/autotest_common.sh@10 -- # set +x 00:03:55.163 [2024-12-02 07:31:20.768502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:55.163 [2024-12-02 07:31:20.768621] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54854 ] 00:03:55.420 [2024-12-02 07:31:20.909371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:55.420 [2024-12-02 07:31:20.981391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.420 [2024-12-02 07:31:20.981513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.420 [2024-12-02 07:31:20.981889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:03:55.420 [2024-12-02 07:31:20.982280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:03:56.355 07:31:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.355 07:31:21 -- common/autotest_common.sh@862 -- # return 0 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 POWER: Env isn't set yet! 00:03:56.355 POWER: Attempting to initialise ACPI cpufreq power management... 00:03:56.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:03:56.355 POWER: Cannot set governor of lcore 0 to userspace 00:03:56.355 POWER: Attempting to initialise PSTAT power management... 00:03:56.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:03:56.355 POWER: Cannot set governor of lcore 0 to performance 00:03:56.355 POWER: Attempting to initialise AMD PSTATE power management... 00:03:56.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:03:56.355 POWER: Cannot set governor of lcore 0 to userspace 00:03:56.355 POWER: Attempting to initialise CPPC power management... 00:03:56.355 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:03:56.355 POWER: Cannot set governor of lcore 0 to userspace 00:03:56.355 POWER: Attempting to initialise VM power management... 
00:03:56.355 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:03:56.355 POWER: Unable to set Power Management Environment for lcore 0 00:03:56.355 [2024-12-02 07:31:21.691644] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:03:56.355 [2024-12-02 07:31:21.691656] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:03:56.355 [2024-12-02 07:31:21.691664] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:03:56.355 [2024-12-02 07:31:21.691691] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:56.355 [2024-12-02 07:31:21.691698] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:56.355 [2024-12-02 07:31:21.691704] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 [2024-12-02 07:31:21.742875] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:56.355 07:31:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.355 07:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 ************************************ 00:03:56.355 START TEST scheduler_create_thread 00:03:56.355 ************************************ 00:03:56.355 07:31:21 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 2 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 3 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 4 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 5 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 6 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 7 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 8 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 9 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 10 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.355 07:31:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.355 07:31:21 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:56.355 07:31:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.355 07:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:56.937 07:31:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.937 07:31:22 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:56.937 07:31:22 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:56.937 07:31:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.937 07:31:22 -- common/autotest_common.sh@10 -- # set +x 00:03:57.908 ************************************ 00:03:57.908 END TEST scheduler_create_thread 00:03:57.908 ************************************ 00:03:57.908 07:31:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.908 00:03:57.908 real 0m1.750s 00:03:57.908 user 0m0.017s 00:03:57.908 sys 0m0.006s 00:03:57.908 07:31:23 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.908 07:31:23 -- common/autotest_common.sh@10 -- # set +x 00:03:58.167 07:31:23 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:03:58.167 07:31:23 -- scheduler/scheduler.sh@46 -- # killprocess 54854 00:03:58.167 07:31:23 -- common/autotest_common.sh@936 -- # '[' -z 54854 ']' 00:03:58.167 07:31:23 -- common/autotest_common.sh@940 -- # kill -0 54854 00:03:58.167 07:31:23 -- common/autotest_common.sh@941 -- # uname 00:03:58.167 07:31:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:58.167 07:31:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54854 00:03:58.167 killing process with pid 54854 00:03:58.167 07:31:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:03:58.167 07:31:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:03:58.167 07:31:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54854' 00:03:58.167 07:31:23 -- common/autotest_common.sh@955 -- # kill 54854 00:03:58.167 07:31:23 -- common/autotest_common.sh@960 -- # wait 54854 00:03:58.427 [2024-12-02 07:31:23.984996] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:03:58.686 ************************************ 00:03:58.686 END TEST event_scheduler 00:03:58.686 ************************************ 00:03:58.687 00:03:58.687 real 0m3.604s 00:03:58.687 user 0m6.435s 00:03:58.687 sys 0m0.308s 00:03:58.687 07:31:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.687 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:03:58.687 07:31:24 -- event/event.sh@51 -- # modprobe -n nbd 00:03:58.687 07:31:24 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:03:58.687 07:31:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.687 07:31:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.687 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:03:58.687 ************************************ 00:03:58.687 START TEST app_repeat 00:03:58.687 ************************************ 00:03:58.687 07:31:24 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:03:58.687 07:31:24 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:58.687 07:31:24 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:58.687 07:31:24 -- event/event.sh@13 -- # local nbd_list 00:03:58.687 07:31:24 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:58.687 07:31:24 -- event/event.sh@14 -- # local bdev_list 00:03:58.687 07:31:24 -- event/event.sh@15 -- # local repeat_times=4 00:03:58.687 07:31:24 -- event/event.sh@17 -- # modprobe nbd 00:03:58.687 Process app_repeat pid: 54937 00:03:58.687 spdk_app_start Round 0 00:03:58.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:03:58.687 07:31:24 -- event/event.sh@19 -- # repeat_pid=54937 00:03:58.687 07:31:24 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.687 07:31:24 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54937' 00:03:58.687 07:31:24 -- event/event.sh@23 -- # for i in {0..2} 00:03:58.687 07:31:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:03:58.687 07:31:24 -- event/event.sh@25 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:03:58.687 07:31:24 -- common/autotest_common.sh@829 -- # '[' -z 54937 ']' 00:03:58.687 07:31:24 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:03:58.687 07:31:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:58.687 07:31:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:58.687 07:31:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:58.687 07:31:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:58.687 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:03:58.687 [2024-12-02 07:31:24.237963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:58.687 [2024-12-02 07:31:24.238234] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54937 ] 00:03:58.947 [2024-12-02 07:31:24.370820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:58.947 [2024-12-02 07:31:24.424135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:58.947 [2024-12-02 07:31:24.424144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.883 07:31:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:59.883 07:31:25 -- common/autotest_common.sh@862 -- # return 0 00:03:59.883 07:31:25 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:59.883 Malloc0 00:03:59.883 07:31:25 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:00.141 Malloc1 00:04:00.141 07:31:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@12 -- # local i 00:04:00.141 07:31:25 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:00.141 07:31:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:00.398 /dev/nbd0 00:04:00.398 07:31:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:00.398 07:31:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:00.398 07:31:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:00.398 07:31:25 -- common/autotest_common.sh@867 -- # local i 00:04:00.398 07:31:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:00.398 07:31:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:00.398 07:31:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:00.398 07:31:25 -- common/autotest_common.sh@871 -- # break 00:04:00.398 07:31:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:00.398 07:31:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:00.398 07:31:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:00.398 1+0 records in 00:04:00.398 1+0 records out 00:04:00.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302154 s, 13.6 MB/s 00:04:00.398 07:31:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:00.398 07:31:25 -- common/autotest_common.sh@884 -- # size=4096 00:04:00.398 07:31:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:00.398 07:31:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:00.398 07:31:25 -- common/autotest_common.sh@887 -- # return 0 00:04:00.398 07:31:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:00.398 07:31:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:00.398 07:31:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:00.655 /dev/nbd1 00:04:00.655 07:31:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:00.655 07:31:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:00.655 07:31:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:00.655 07:31:26 -- common/autotest_common.sh@867 -- # local i 00:04:00.655 07:31:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:00.655 07:31:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:00.655 07:31:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:00.655 07:31:26 -- common/autotest_common.sh@871 -- # break 00:04:00.655 07:31:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:00.655 07:31:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:00.655 07:31:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:00.655 1+0 records in 00:04:00.655 1+0 records out 00:04:00.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308523 s, 13.3 MB/s 00:04:00.655 07:31:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:00.655 07:31:26 -- common/autotest_common.sh@884 -- # size=4096 00:04:00.655 07:31:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:00.655 07:31:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:00.655 07:31:26 -- common/autotest_common.sh@887 -- # return 0 00:04:00.655 
07:31:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:00.655 07:31:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:00.655 07:31:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:00.655 07:31:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.655 07:31:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:00.913 { 00:04:00.913 "nbd_device": "/dev/nbd0", 00:04:00.913 "bdev_name": "Malloc0" 00:04:00.913 }, 00:04:00.913 { 00:04:00.913 "nbd_device": "/dev/nbd1", 00:04:00.913 "bdev_name": "Malloc1" 00:04:00.913 } 00:04:00.913 ]' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:00.913 { 00:04:00.913 "nbd_device": "/dev/nbd0", 00:04:00.913 "bdev_name": "Malloc0" 00:04:00.913 }, 00:04:00.913 { 00:04:00.913 "nbd_device": "/dev/nbd1", 00:04:00.913 "bdev_name": "Malloc1" 00:04:00.913 } 00:04:00.913 ]' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:00.913 /dev/nbd1' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:00.913 /dev/nbd1' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@65 -- # count=2 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@95 -- # count=2 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:00.913 256+0 records in 00:04:00.913 256+0 records out 00:04:00.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653153 s, 161 MB/s 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:00.913 07:31:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:01.172 256+0 records in 00:04:01.172 256+0 records out 00:04:01.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241068 s, 43.5 MB/s 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:01.172 256+0 records in 00:04:01.172 256+0 records out 00:04:01.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276831 s, 37.9 MB/s 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@51 -- # local i 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:01.172 07:31:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:01.431 07:31:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@41 -- # break 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@45 -- # return 0 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:01.432 07:31:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@41 -- # break 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.432 07:31:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:01.998 07:31:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@65 -- # true 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@65 -- # count=0 00:04:01.999 
07:31:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@104 -- # count=0 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:01.999 07:31:27 -- bdev/nbd_common.sh@109 -- # return 0 00:04:01.999 07:31:27 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:01.999 07:31:27 -- event/event.sh@35 -- # sleep 3 00:04:02.257 [2024-12-02 07:31:27.746033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.257 [2024-12-02 07:31:27.792191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.257 [2024-12-02 07:31:27.792199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.257 [2024-12-02 07:31:27.819205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:02.257 [2024-12-02 07:31:27.819277] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:05.542 07:31:30 -- event/event.sh@23 -- # for i in {0..2} 00:04:05.542 spdk_app_start Round 1 00:04:05.542 07:31:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:05.542 07:31:30 -- event/event.sh@25 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:04:05.542 07:31:30 -- common/autotest_common.sh@829 -- # '[' -z 54937 ']' 00:04:05.542 07:31:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:05.542 07:31:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:05.542 07:31:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
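Round 0 above exercises the nbd write/verify path end to end. Condensed into a sketch, the traced dd/cmp commands amount to the following (file locations are the ones visible in the trace; this is an illustration, not the exact nbd_common.sh source):

    rand=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    # write phase: 1 MiB of random data, copied onto every exported nbd device
    dd if=/dev/urandom of="$rand" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$rand" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB read back from each device must match the file byte-for-byte
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$rand" "$dev"
    done
    rm "$rand"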
00:04:05.542 07:31:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.542 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:04:05.542 07:31:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.542 07:31:30 -- common/autotest_common.sh@862 -- # return 0 00:04:05.542 07:31:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:05.542 Malloc0 00:04:05.542 07:31:31 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:05.802 Malloc1 00:04:05.802 07:31:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@12 -- # local i 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:05.802 07:31:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:06.061 /dev/nbd0 00:04:06.061 07:31:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:06.061 07:31:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:06.061 07:31:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:06.061 07:31:31 -- common/autotest_common.sh@867 -- # local i 00:04:06.061 07:31:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:06.061 07:31:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:06.061 07:31:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:06.061 07:31:31 -- common/autotest_common.sh@871 -- # break 00:04:06.061 07:31:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:06.061 07:31:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:06.061 07:31:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:06.320 1+0 records in 00:04:06.320 1+0 records out 00:04:06.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025894 s, 15.8 MB/s 00:04:06.320 07:31:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:06.320 07:31:31 -- common/autotest_common.sh@884 -- # size=4096 00:04:06.320 07:31:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:06.320 07:31:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:06.320 07:31:31 -- common/autotest_common.sh@887 -- # return 0 00:04:06.320 07:31:31 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:06.320 /dev/nbd1 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:06.320 07:31:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:06.320 07:31:31 -- common/autotest_common.sh@867 -- # local i 00:04:06.320 07:31:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:06.320 07:31:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:06.320 07:31:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:06.320 07:31:31 -- common/autotest_common.sh@871 -- # break 00:04:06.320 07:31:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:06.320 07:31:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:06.320 07:31:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:06.320 1+0 records in 00:04:06.320 1+0 records out 00:04:06.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027624 s, 14.8 MB/s 00:04:06.320 07:31:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:06.320 07:31:31 -- common/autotest_common.sh@884 -- # size=4096 00:04:06.320 07:31:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:06.320 07:31:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:06.320 07:31:31 -- common/autotest_common.sh@887 -- # return 0 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.320 07:31:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:06.578 07:31:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:06.578 { 00:04:06.578 "nbd_device": "/dev/nbd0", 00:04:06.578 "bdev_name": "Malloc0" 00:04:06.578 }, 00:04:06.578 { 00:04:06.578 "nbd_device": "/dev/nbd1", 00:04:06.578 "bdev_name": "Malloc1" 00:04:06.578 } 00:04:06.578 ]' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:06.838 { 00:04:06.838 "nbd_device": "/dev/nbd0", 00:04:06.838 "bdev_name": "Malloc0" 00:04:06.838 }, 00:04:06.838 { 00:04:06.838 "nbd_device": "/dev/nbd1", 00:04:06.838 "bdev_name": "Malloc1" 00:04:06.838 } 00:04:06.838 ]' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:06.838 /dev/nbd1' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:06.838 /dev/nbd1' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@65 -- # count=2 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@95 -- # count=2 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:06.838 256+0 records in 00:04:06.838 256+0 records out 00:04:06.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524133 s, 200 MB/s 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:06.838 256+0 records in 00:04:06.838 256+0 records out 00:04:06.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197188 s, 53.2 MB/s 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:06.838 256+0 records in 00:04:06.838 256+0 records out 00:04:06.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209961 s, 49.9 MB/s 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@51 -- # local i 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:06.838 07:31:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:04:07.097 07:31:32 -- bdev/nbd_common.sh@41 -- # break 00:04:07.098 07:31:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:07.098 07:31:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:07.098 07:31:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@41 -- # break 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.357 07:31:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@65 -- # true 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@65 -- # count=0 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@104 -- # count=0 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:07.617 07:31:33 -- bdev/nbd_common.sh@109 -- # return 0 00:04:07.617 07:31:33 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:08.185 07:31:33 -- event/event.sh@35 -- # sleep 3 00:04:08.185 [2024-12-02 07:31:33.629502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:08.185 [2024-12-02 07:31:33.674693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.185 [2024-12-02 07:31:33.674698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.185 [2024-12-02 07:31:33.701541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:08.185 [2024-12-02 07:31:33.701608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:11.469 07:31:36 -- event/event.sh@23 -- # for i in {0..2} 00:04:11.469 spdk_app_start Round 2 00:04:11.469 07:31:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:11.469 07:31:36 -- event/event.sh@25 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:04:11.469 07:31:36 -- common/autotest_common.sh@829 -- # '[' -z 54937 ']' 00:04:11.469 07:31:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.469 07:31:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
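After both devices are stopped, the trace confirms that no nbd exports remain before killing the app. A sketch of that count check as traced (the rpc.py path and socket are the ones used throughout this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks_json=$($rpc -s "$sock" nbd_get_disks)               # '[]' once both disks are stopped
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # extract any /dev/nbdN names
    count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c still prints 0 with no match; || true keeps set -e happy
    [ "$count" -eq 0 ]                                         # anything still exported is a failure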
00:04:11.469 07:31:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.469 07:31:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.469 07:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:11.469 07:31:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.469 07:31:36 -- common/autotest_common.sh@862 -- # return 0 00:04:11.469 07:31:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.469 Malloc0 00:04:11.469 07:31:36 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.729 Malloc1 00:04:11.729 07:31:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@12 -- # local i 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.729 07:31:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:11.987 /dev/nbd0 00:04:11.987 07:31:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:11.988 07:31:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:11.988 07:31:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:11.988 07:31:37 -- common/autotest_common.sh@867 -- # local i 00:04:11.988 07:31:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:11.988 07:31:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:11.988 07:31:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:11.988 07:31:37 -- common/autotest_common.sh@871 -- # break 00:04:11.988 07:31:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:11.988 07:31:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:11.988 07:31:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.988 1+0 records in 00:04:11.988 1+0 records out 00:04:11.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274881 s, 14.9 MB/s 00:04:11.988 07:31:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:11.988 07:31:37 -- common/autotest_common.sh@884 -- # size=4096 00:04:11.988 07:31:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:11.988 07:31:37 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:11.988 07:31:37 -- common/autotest_common.sh@887 -- # return 0 00:04:11.988 07:31:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.988 07:31:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.988 07:31:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:12.246 /dev/nbd1 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:12.247 07:31:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:12.247 07:31:37 -- common/autotest_common.sh@867 -- # local i 00:04:12.247 07:31:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:12.247 07:31:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:12.247 07:31:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:12.247 07:31:37 -- common/autotest_common.sh@871 -- # break 00:04:12.247 07:31:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:12.247 07:31:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:12.247 07:31:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.247 1+0 records in 00:04:12.247 1+0 records out 00:04:12.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287564 s, 14.2 MB/s 00:04:12.247 07:31:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:12.247 07:31:37 -- common/autotest_common.sh@884 -- # size=4096 00:04:12.247 07:31:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:12.247 07:31:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:12.247 07:31:37 -- common/autotest_common.sh@887 -- # return 0 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.247 07:31:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.506 07:31:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:12.506 { 00:04:12.506 "nbd_device": "/dev/nbd0", 00:04:12.506 "bdev_name": "Malloc0" 00:04:12.506 }, 00:04:12.506 { 00:04:12.506 "nbd_device": "/dev/nbd1", 00:04:12.506 "bdev_name": "Malloc1" 00:04:12.506 } 00:04:12.506 ]' 00:04:12.506 07:31:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:12.506 { 00:04:12.506 "nbd_device": "/dev/nbd0", 00:04:12.506 "bdev_name": "Malloc0" 00:04:12.506 }, 00:04:12.506 { 00:04:12.506 "nbd_device": "/dev/nbd1", 00:04:12.506 "bdev_name": "Malloc1" 00:04:12.506 } 00:04:12.506 ]' 00:04:12.506 07:31:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:12.506 /dev/nbd1' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:12.506 /dev/nbd1' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@65 -- # count=2 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@95 -- # count=2 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:12.506 07:31:38 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:12.506 256+0 records in 00:04:12.506 256+0 records out 00:04:12.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718034 s, 146 MB/s 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:12.506 256+0 records in 00:04:12.506 256+0 records out 00:04:12.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246726 s, 42.5 MB/s 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:12.506 256+0 records in 00:04:12.506 256+0 records out 00:04:12.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240252 s, 43.6 MB/s 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@51 -- # local i 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.506 07:31:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:04:12.843 07:31:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@41 -- # break 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.843 07:31:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@41 -- # break 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.203 07:31:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@65 -- # true 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@65 -- # count=0 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@104 -- # count=0 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:13.462 07:31:38 -- bdev/nbd_common.sh@109 -- # return 0 00:04:13.462 07:31:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:13.721 07:31:39 -- event/event.sh@35 -- # sleep 3 00:04:13.721 [2024-12-02 07:31:39.318502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.980 [2024-12-02 07:31:39.365942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.980 [2024-12-02 07:31:39.365950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.980 [2024-12-02 07:31:39.393069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:13.980 [2024-12-02 07:31:39.393135] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
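Taken together, the rounds above all follow the same pattern. A condensed, illustrative sketch of the loop driven by event.sh (the app_repeat binary and helper names are the ones traced; the backgrounding details are simplified here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # app_repeat restarts its SPDK app after every shutdown; -t 4 gives four rounds in total
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$sock"                  # block until the RPC socket answers
        $rpc -s "$sock" bdev_malloc_create 64 4096           # Malloc0
        $rpc -s "$sock" bdev_malloc_create 64 4096           # Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc -s "$sock" spdk_kill_instance SIGTERM           # end this round; the app re-initializes
        sleep 3
    done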
00:04:17.269 07:31:42 -- event/event.sh@38 -- # waitforlisten 54937 /var/tmp/spdk-nbd.sock 00:04:17.269 07:31:42 -- common/autotest_common.sh@829 -- # '[' -z 54937 ']' 00:04:17.269 07:31:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.269 07:31:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.269 07:31:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.269 07:31:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.269 07:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.269 07:31:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.269 07:31:42 -- common/autotest_common.sh@862 -- # return 0 00:04:17.269 07:31:42 -- event/event.sh@39 -- # killprocess 54937 00:04:17.269 07:31:42 -- common/autotest_common.sh@936 -- # '[' -z 54937 ']' 00:04:17.269 07:31:42 -- common/autotest_common.sh@940 -- # kill -0 54937 00:04:17.269 07:31:42 -- common/autotest_common.sh@941 -- # uname 00:04:17.269 07:31:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:17.269 07:31:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54937 00:04:17.269 killing process with pid 54937 00:04:17.269 07:31:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:17.269 07:31:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:17.269 07:31:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54937' 00:04:17.269 07:31:42 -- common/autotest_common.sh@955 -- # kill 54937 00:04:17.269 07:31:42 -- common/autotest_common.sh@960 -- # wait 54937 00:04:17.269 spdk_app_start is called in Round 0. 00:04:17.269 Shutdown signal received, stop current app iteration 00:04:17.269 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:17.269 spdk_app_start is called in Round 1. 00:04:17.269 Shutdown signal received, stop current app iteration 00:04:17.269 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:17.269 spdk_app_start is called in Round 2. 00:04:17.269 Shutdown signal received, stop current app iteration 00:04:17.269 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:17.269 spdk_app_start is called in Round 3. 
00:04:17.269 Shutdown signal received, stop current app iteration 00:04:17.269 ************************************ 00:04:17.269 END TEST app_repeat 00:04:17.269 ************************************ 00:04:17.269 07:31:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:17.269 07:31:42 -- event/event.sh@42 -- # return 0 00:04:17.269 00:04:17.269 real 0m18.418s 00:04:17.269 user 0m41.772s 00:04:17.269 sys 0m2.328s 00:04:17.269 07:31:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.269 07:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.269 07:31:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:17.269 07:31:42 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:17.269 07:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.269 07:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.269 07:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.269 ************************************ 00:04:17.269 START TEST cpu_locks 00:04:17.269 ************************************ 00:04:17.269 07:31:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:17.269 * Looking for test storage... 00:04:17.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:17.269 07:31:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:17.269 07:31:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:17.269 07:31:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:17.269 07:31:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:17.269 07:31:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:17.269 07:31:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:17.269 07:31:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:17.269 07:31:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:17.269 07:31:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:17.269 07:31:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.269 07:31:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:17.269 07:31:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:17.269 07:31:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:17.269 07:31:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:17.270 07:31:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:17.270 07:31:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:17.270 07:31:42 -- scripts/common.sh@344 -- # : 1 00:04:17.270 07:31:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:17.270 07:31:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.270 07:31:42 -- scripts/common.sh@364 -- # decimal 1 00:04:17.270 07:31:42 -- scripts/common.sh@352 -- # local d=1 00:04:17.270 07:31:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.270 07:31:42 -- scripts/common.sh@354 -- # echo 1 00:04:17.270 07:31:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:17.270 07:31:42 -- scripts/common.sh@365 -- # decimal 2 00:04:17.270 07:31:42 -- scripts/common.sh@352 -- # local d=2 00:04:17.270 07:31:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.270 07:31:42 -- scripts/common.sh@354 -- # echo 2 00:04:17.270 07:31:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:17.270 07:31:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:17.270 07:31:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:17.270 07:31:42 -- scripts/common.sh@367 -- # return 0 00:04:17.270 07:31:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.270 07:31:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:17.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.270 --rc genhtml_branch_coverage=1 00:04:17.270 --rc genhtml_function_coverage=1 00:04:17.270 --rc genhtml_legend=1 00:04:17.270 --rc geninfo_all_blocks=1 00:04:17.270 --rc geninfo_unexecuted_blocks=1 00:04:17.270 00:04:17.270 ' 00:04:17.270 07:31:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:17.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.270 --rc genhtml_branch_coverage=1 00:04:17.270 --rc genhtml_function_coverage=1 00:04:17.270 --rc genhtml_legend=1 00:04:17.270 --rc geninfo_all_blocks=1 00:04:17.270 --rc geninfo_unexecuted_blocks=1 00:04:17.270 00:04:17.270 ' 00:04:17.270 07:31:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:17.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.270 --rc genhtml_branch_coverage=1 00:04:17.270 --rc genhtml_function_coverage=1 00:04:17.270 --rc genhtml_legend=1 00:04:17.270 --rc geninfo_all_blocks=1 00:04:17.270 --rc geninfo_unexecuted_blocks=1 00:04:17.270 00:04:17.270 ' 00:04:17.270 07:31:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:17.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.270 --rc genhtml_branch_coverage=1 00:04:17.270 --rc genhtml_function_coverage=1 00:04:17.270 --rc genhtml_legend=1 00:04:17.270 --rc geninfo_all_blocks=1 00:04:17.270 --rc geninfo_unexecuted_blocks=1 00:04:17.270 00:04:17.270 ' 00:04:17.270 07:31:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:17.270 07:31:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:17.270 07:31:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:17.270 07:31:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:17.270 07:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.270 07:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.270 07:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.270 ************************************ 00:04:17.270 START TEST default_locks 00:04:17.270 ************************************ 00:04:17.270 07:31:42 -- common/autotest_common.sh@1114 -- # default_locks 00:04:17.270 07:31:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55377 00:04:17.270 07:31:42 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.270 07:31:42 -- event/cpu_locks.sh@47 -- # waitforlisten 
55377 00:04:17.270 07:31:42 -- common/autotest_common.sh@829 -- # '[' -z 55377 ']' 00:04:17.270 07:31:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.270 07:31:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.270 07:31:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.270 07:31:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.270 07:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.530 [2024-12-02 07:31:42.898137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:17.530 [2024-12-02 07:31:42.898223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55377 ] 00:04:17.530 [2024-12-02 07:31:43.023910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.530 [2024-12-02 07:31:43.071614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:17.530 [2024-12-02 07:31:43.071783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.462 07:31:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.462 07:31:43 -- common/autotest_common.sh@862 -- # return 0 00:04:18.462 07:31:43 -- event/cpu_locks.sh@49 -- # locks_exist 55377 00:04:18.462 07:31:43 -- event/cpu_locks.sh@22 -- # lslocks -p 55377 00:04:18.462 07:31:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:18.720 07:31:44 -- event/cpu_locks.sh@50 -- # killprocess 55377 00:04:18.720 07:31:44 -- common/autotest_common.sh@936 -- # '[' -z 55377 ']' 00:04:18.720 07:31:44 -- common/autotest_common.sh@940 -- # kill -0 55377 00:04:18.720 07:31:44 -- common/autotest_common.sh@941 -- # uname 00:04:18.720 07:31:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.720 07:31:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55377 00:04:18.720 07:31:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:18.720 07:31:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:18.720 killing process with pid 55377 00:04:18.720 07:31:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55377' 00:04:18.720 07:31:44 -- common/autotest_common.sh@955 -- # kill 55377 00:04:18.720 07:31:44 -- common/autotest_common.sh@960 -- # wait 55377 00:04:18.978 07:31:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55377 00:04:18.978 07:31:44 -- common/autotest_common.sh@650 -- # local es=0 00:04:18.978 07:31:44 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55377 00:04:18.978 07:31:44 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:18.978 07:31:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.978 07:31:44 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:18.978 07:31:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.978 07:31:44 -- common/autotest_common.sh@653 -- # waitforlisten 55377 00:04:18.978 07:31:44 -- common/autotest_common.sh@829 -- # '[' -z 55377 ']' 00:04:18.978 07:31:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.978 07:31:44 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.978 07:31:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.978 07:31:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.978 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.978 ERROR: process (pid: 55377) is no longer running 00:04:18.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55377) - No such process 00:04:18.978 07:31:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.978 07:31:44 -- common/autotest_common.sh@862 -- # return 1 00:04:18.978 07:31:44 -- common/autotest_common.sh@653 -- # es=1 00:04:18.978 07:31:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:18.978 07:31:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:18.978 07:31:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:18.978 07:31:44 -- event/cpu_locks.sh@54 -- # no_locks 00:04:18.978 07:31:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:18.978 07:31:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:18.978 07:31:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:18.978 00:04:18.978 real 0m1.691s 00:04:18.978 user 0m1.921s 00:04:18.978 sys 0m0.429s 00:04:18.978 07:31:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.978 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.978 ************************************ 00:04:18.978 END TEST default_locks 00:04:18.978 ************************************ 00:04:18.978 07:31:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:18.978 07:31:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.978 07:31:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.978 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.978 ************************************ 00:04:18.978 START TEST default_locks_via_rpc 00:04:18.978 ************************************ 00:04:18.978 07:31:44 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:04:18.978 07:31:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55423 00:04:18.978 07:31:44 -- event/cpu_locks.sh@63 -- # waitforlisten 55423 00:04:18.978 07:31:44 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.978 07:31:44 -- common/autotest_common.sh@829 -- # '[' -z 55423 ']' 00:04:18.978 07:31:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.978 07:31:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.978 07:31:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.978 07:31:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.978 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:04:19.237 [2024-12-02 07:31:44.654081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
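Both locking tests rely on the same probe: the running spdk_tgt must actually hold a file lock whose path contains spdk_cpu_lock for the cores in its mask. A sketch of the locks_exist check as traced (the pid is the one from this log; the lock-file directory itself is not visible in the trace):

    locks_exist() {
        local pid=$1
        # lslocks lists every lock the process holds; the per-core lock shows up
        # with spdk_cpu_lock in its path while the core mask is claimed
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 55377     # succeeds while the -m 0x1 target is running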
00:04:19.237 [2024-12-02 07:31:44.654183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55423 ] 00:04:19.237 [2024-12-02 07:31:44.782105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.237 [2024-12-02 07:31:44.829788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:19.237 [2024-12-02 07:31:44.829920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.175 07:31:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.175 07:31:45 -- common/autotest_common.sh@862 -- # return 0 00:04:20.175 07:31:45 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:20.175 07:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.175 07:31:45 -- common/autotest_common.sh@10 -- # set +x 00:04:20.175 07:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.175 07:31:45 -- event/cpu_locks.sh@67 -- # no_locks 00:04:20.175 07:31:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:20.175 07:31:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:20.175 07:31:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:20.175 07:31:45 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:20.175 07:31:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.175 07:31:45 -- common/autotest_common.sh@10 -- # set +x 00:04:20.175 07:31:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.175 07:31:45 -- event/cpu_locks.sh@71 -- # locks_exist 55423 00:04:20.175 07:31:45 -- event/cpu_locks.sh@22 -- # lslocks -p 55423 00:04:20.175 07:31:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:20.433 07:31:46 -- event/cpu_locks.sh@73 -- # killprocess 55423 00:04:20.433 07:31:46 -- common/autotest_common.sh@936 -- # '[' -z 55423 ']' 00:04:20.433 07:31:46 -- common/autotest_common.sh@940 -- # kill -0 55423 00:04:20.433 07:31:46 -- common/autotest_common.sh@941 -- # uname 00:04:20.433 07:31:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:20.433 07:31:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55423 00:04:20.433 07:31:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:20.433 killing process with pid 55423 00:04:20.433 07:31:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:20.433 07:31:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55423' 00:04:20.433 07:31:46 -- common/autotest_common.sh@955 -- # kill 55423 00:04:20.433 07:31:46 -- common/autotest_common.sh@960 -- # wait 55423 00:04:20.691 00:04:20.691 real 0m1.706s 00:04:20.691 user 0m1.932s 00:04:20.691 sys 0m0.444s 00:04:20.691 07:31:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.691 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.691 ************************************ 00:04:20.691 END TEST default_locks_via_rpc 00:04:20.691 ************************************ 00:04:20.951 07:31:46 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:20.951 07:31:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.951 07:31:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.951 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 
************************************ 00:04:20.951 START TEST non_locking_app_on_locked_coremask 00:04:20.951 ************************************ 00:04:20.951 07:31:46 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:04:20.951 07:31:46 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55474 00:04:20.951 07:31:46 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.951 07:31:46 -- event/cpu_locks.sh@81 -- # waitforlisten 55474 /var/tmp/spdk.sock 00:04:20.951 07:31:46 -- common/autotest_common.sh@829 -- # '[' -z 55474 ']' 00:04:20.951 07:31:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.951 07:31:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.951 07:31:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.951 07:31:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.951 07:31:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 [2024-12-02 07:31:46.416257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:20.951 [2024-12-02 07:31:46.416377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55474 ] 00:04:20.951 [2024-12-02 07:31:46.544286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.210 [2024-12-02 07:31:46.594266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:21.210 [2024-12-02 07:31:46.594448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.146 07:31:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.146 07:31:47 -- common/autotest_common.sh@862 -- # return 0 00:04:22.146 07:31:47 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55490 00:04:22.146 07:31:47 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:22.146 07:31:47 -- event/cpu_locks.sh@85 -- # waitforlisten 55490 /var/tmp/spdk2.sock 00:04:22.146 07:31:47 -- common/autotest_common.sh@829 -- # '[' -z 55490 ']' 00:04:22.146 07:31:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:22.146 07:31:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:22.146 07:31:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:22.146 07:31:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.146 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:22.146 [2024-12-02 07:31:47.466845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:22.146 [2024-12-02 07:31:47.466965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55490 ] 00:04:22.146 [2024-12-02 07:31:47.605290] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:22.147 [2024-12-02 07:31:47.605338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.147 [2024-12-02 07:31:47.704425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:22.147 [2024-12-02 07:31:47.704552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.083 07:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.083 07:31:48 -- common/autotest_common.sh@862 -- # return 0 00:04:23.083 07:31:48 -- event/cpu_locks.sh@87 -- # locks_exist 55474 00:04:23.083 07:31:48 -- event/cpu_locks.sh@22 -- # lslocks -p 55474 00:04:23.083 07:31:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:23.342 07:31:48 -- event/cpu_locks.sh@89 -- # killprocess 55474 00:04:23.342 07:31:48 -- common/autotest_common.sh@936 -- # '[' -z 55474 ']' 00:04:23.342 07:31:48 -- common/autotest_common.sh@940 -- # kill -0 55474 00:04:23.342 07:31:48 -- common/autotest_common.sh@941 -- # uname 00:04:23.342 07:31:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.342 07:31:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55474 00:04:23.342 07:31:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:23.342 killing process with pid 55474 00:04:23.342 07:31:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:23.342 07:31:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55474' 00:04:23.342 07:31:48 -- common/autotest_common.sh@955 -- # kill 55474 00:04:23.342 07:31:48 -- common/autotest_common.sh@960 -- # wait 55474 00:04:23.909 07:31:49 -- event/cpu_locks.sh@90 -- # killprocess 55490 00:04:23.909 07:31:49 -- common/autotest_common.sh@936 -- # '[' -z 55490 ']' 00:04:23.909 07:31:49 -- common/autotest_common.sh@940 -- # kill -0 55490 00:04:23.909 07:31:49 -- common/autotest_common.sh@941 -- # uname 00:04:23.909 07:31:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.909 07:31:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55490 00:04:23.909 07:31:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:23.909 killing process with pid 55490 00:04:23.909 07:31:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:23.909 07:31:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55490' 00:04:23.909 07:31:49 -- common/autotest_common.sh@955 -- # kill 55490 00:04:23.909 07:31:49 -- common/autotest_common.sh@960 -- # wait 55490 00:04:24.168 00:04:24.168 real 0m3.348s 00:04:24.168 user 0m3.974s 00:04:24.168 sys 0m0.717s 00:04:24.168 07:31:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.168 07:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:24.168 ************************************ 00:04:24.168 END TEST non_locking_app_on_locked_coremask 00:04:24.168 ************************************ 00:04:24.168 07:31:49 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:24.168 07:31:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.168 07:31:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.168 07:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:24.168 ************************************ 00:04:24.168 START TEST locking_app_on_unlocked_coremask 00:04:24.168 ************************************ 00:04:24.168 07:31:49 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:04:24.168 07:31:49 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55546 00:04:24.168 07:31:49 -- event/cpu_locks.sh@99 -- # waitforlisten 55546 /var/tmp/spdk.sock 00:04:24.168 07:31:49 -- common/autotest_common.sh@829 -- # '[' -z 55546 ']' 00:04:24.168 07:31:49 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:24.168 07:31:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.168 07:31:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.168 07:31:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.168 07:31:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.168 07:31:49 -- common/autotest_common.sh@10 -- # set +x 00:04:24.427 [2024-12-02 07:31:49.799525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:24.427 [2024-12-02 07:31:49.799606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55546 ] 00:04:24.427 [2024-12-02 07:31:49.921606] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:24.427 [2024-12-02 07:31:49.921654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.427 [2024-12-02 07:31:49.970149] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:24.427 [2024-12-02 07:31:49.970333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.363 07:31:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.363 07:31:50 -- common/autotest_common.sh@862 -- # return 0 00:04:25.364 07:31:50 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55562 00:04:25.364 07:31:50 -- event/cpu_locks.sh@103 -- # waitforlisten 55562 /var/tmp/spdk2.sock 00:04:25.364 07:31:50 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:25.364 07:31:50 -- common/autotest_common.sh@829 -- # '[' -z 55562 ']' 00:04:25.364 07:31:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:25.364 07:31:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:25.364 07:31:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:25.364 07:31:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.364 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:04:25.364 [2024-12-02 07:31:50.821964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:25.364 [2024-12-02 07:31:50.822071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55562 ] 00:04:25.364 [2024-12-02 07:31:50.961134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.623 [2024-12-02 07:31:51.057551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:25.623 [2024-12-02 07:31:51.057733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.191 07:31:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.191 07:31:51 -- common/autotest_common.sh@862 -- # return 0 00:04:26.191 07:31:51 -- event/cpu_locks.sh@105 -- # locks_exist 55562 00:04:26.191 07:31:51 -- event/cpu_locks.sh@22 -- # lslocks -p 55562 00:04:26.191 07:31:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:26.759 07:31:52 -- event/cpu_locks.sh@107 -- # killprocess 55546 00:04:26.759 07:31:52 -- common/autotest_common.sh@936 -- # '[' -z 55546 ']' 00:04:26.759 07:31:52 -- common/autotest_common.sh@940 -- # kill -0 55546 00:04:26.759 07:31:52 -- common/autotest_common.sh@941 -- # uname 00:04:26.759 07:31:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.759 07:31:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55546 00:04:26.759 07:31:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.759 07:31:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.759 killing process with pid 55546 00:04:26.759 07:31:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55546' 00:04:26.759 07:31:52 -- common/autotest_common.sh@955 -- # kill 55546 00:04:26.759 07:31:52 -- common/autotest_common.sh@960 -- # wait 55546 00:04:27.328 07:31:52 -- event/cpu_locks.sh@108 -- # killprocess 55562 00:04:27.328 07:31:52 -- common/autotest_common.sh@936 -- # '[' -z 55562 ']' 00:04:27.328 07:31:52 -- common/autotest_common.sh@940 -- # kill -0 55562 00:04:27.328 07:31:52 -- common/autotest_common.sh@941 -- # uname 00:04:27.328 07:31:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:27.328 07:31:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55562 00:04:27.328 07:31:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:27.328 07:31:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:27.328 killing process with pid 55562 00:04:27.328 07:31:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55562' 00:04:27.328 07:31:52 -- common/autotest_common.sh@955 -- # kill 55562 00:04:27.328 07:31:52 -- common/autotest_common.sh@960 -- # wait 55562 00:04:27.587 00:04:27.587 real 0m3.276s 00:04:27.587 user 0m3.916s 00:04:27.587 sys 0m0.669s 00:04:27.587 07:31:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:27.587 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:04:27.587 ************************************ 00:04:27.587 END TEST locking_app_on_unlocked_coremask 00:04:27.587 ************************************ 00:04:27.587 07:31:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:27.587 07:31:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.587 07:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.587 07:31:53 -- common/autotest_common.sh@10 -- # set +x 
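In the locking_app_on_unlocked_coremask run above, the first target (pid 55546) is started with --disable-cpumask-locks and so never claims core 0; the second target (pid 55562) is then started on the same mask without that flag and acquires the lock, which lslocks confirms before both processes are killed. A condensed sketch of that scenario, reusing the binary path and RPC sockets seen in this log:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # same core; lock is free, so this instance claims it
    pid2=$!
    sleep 2                                                 # stand-in for the suite's waitforlisten
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second target owns the core 0 lock"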
00:04:27.587 ************************************ 00:04:27.587 START TEST locking_app_on_locked_coremask 00:04:27.587 ************************************ 00:04:27.587 07:31:53 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:04:27.587 07:31:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55624 00:04:27.587 07:31:53 -- event/cpu_locks.sh@116 -- # waitforlisten 55624 /var/tmp/spdk.sock 00:04:27.587 07:31:53 -- common/autotest_common.sh@829 -- # '[' -z 55624 ']' 00:04:27.587 07:31:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.587 07:31:53 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.587 07:31:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.587 07:31:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.587 07:31:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.587 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:04:27.587 [2024-12-02 07:31:53.145179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:27.587 [2024-12-02 07:31:53.145278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55624 ] 00:04:27.847 [2024-12-02 07:31:53.282310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.847 [2024-12-02 07:31:53.330671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:27.847 [2024-12-02 07:31:53.330873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.785 07:31:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.785 07:31:54 -- common/autotest_common.sh@862 -- # return 0 00:04:28.785 07:31:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55640 00:04:28.785 07:31:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55640 /var/tmp/spdk2.sock 00:04:28.785 07:31:54 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:28.785 07:31:54 -- common/autotest_common.sh@650 -- # local es=0 00:04:28.785 07:31:54 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55640 /var/tmp/spdk2.sock 00:04:28.785 07:31:54 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:28.785 07:31:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.785 07:31:54 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:28.785 07:31:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.785 07:31:54 -- common/autotest_common.sh@653 -- # waitforlisten 55640 /var/tmp/spdk2.sock 00:04:28.785 07:31:54 -- common/autotest_common.sh@829 -- # '[' -z 55640 ']' 00:04:28.785 07:31:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:28.785 07:31:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:28.785 07:31:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:28.785 07:31:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.785 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.785 [2024-12-02 07:31:54.184438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:28.785 [2024-12-02 07:31:54.184556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55640 ] 00:04:28.785 [2024-12-02 07:31:54.324147] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55624 has claimed it. 00:04:28.785 [2024-12-02 07:31:54.324227] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:29.354 ERROR: process (pid: 55640) is no longer running 00:04:29.354 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55640) - No such process 00:04:29.354 07:31:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.354 07:31:54 -- common/autotest_common.sh@862 -- # return 1 00:04:29.354 07:31:54 -- common/autotest_common.sh@653 -- # es=1 00:04:29.354 07:31:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:29.354 07:31:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:29.354 07:31:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:29.354 07:31:54 -- event/cpu_locks.sh@122 -- # locks_exist 55624 00:04:29.354 07:31:54 -- event/cpu_locks.sh@22 -- # lslocks -p 55624 00:04:29.354 07:31:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.613 07:31:55 -- event/cpu_locks.sh@124 -- # killprocess 55624 00:04:29.613 07:31:55 -- common/autotest_common.sh@936 -- # '[' -z 55624 ']' 00:04:29.613 07:31:55 -- common/autotest_common.sh@940 -- # kill -0 55624 00:04:29.613 07:31:55 -- common/autotest_common.sh@941 -- # uname 00:04:29.613 07:31:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.613 07:31:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55624 00:04:29.872 07:31:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.872 07:31:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.872 killing process with pid 55624 00:04:29.872 07:31:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55624' 00:04:29.872 07:31:55 -- common/autotest_common.sh@955 -- # kill 55624 00:04:29.872 07:31:55 -- common/autotest_common.sh@960 -- # wait 55624 00:04:29.872 00:04:29.872 real 0m2.415s 00:04:29.872 user 0m2.909s 00:04:29.872 sys 0m0.507s 00:04:29.872 07:31:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.872 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.872 ************************************ 00:04:29.872 END TEST locking_app_on_locked_coremask 00:04:29.872 ************************************ 00:04:30.131 07:31:55 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:30.131 07:31:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.131 07:31:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.131 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:30.131 ************************************ 00:04:30.131 START TEST locking_overlapped_coremask 00:04:30.131 ************************************ 00:04:30.131 07:31:55 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:04:30.131 07:31:55 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55680 00:04:30.131 07:31:55 -- event/cpu_locks.sh@133 -- # waitforlisten 55680 /var/tmp/spdk.sock 00:04:30.131 07:31:55 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:30.131 07:31:55 -- common/autotest_common.sh@829 -- # '[' -z 55680 ']' 00:04:30.131 07:31:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.131 07:31:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.131 07:31:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.131 07:31:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.131 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:30.131 [2024-12-02 07:31:55.595899] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:30.131 [2024-12-02 07:31:55.596001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55680 ] 00:04:30.131 [2024-12-02 07:31:55.724550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:30.391 [2024-12-02 07:31:55.777399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.391 [2024-12-02 07:31:55.777598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.391 [2024-12-02 07:31:55.778621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.391 [2024-12-02 07:31:55.778633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.327 07:31:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.327 07:31:56 -- common/autotest_common.sh@862 -- # return 0 00:04:31.327 07:31:56 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55698 00:04:31.327 07:31:56 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55698 /var/tmp/spdk2.sock 00:04:31.327 07:31:56 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:31.327 07:31:56 -- common/autotest_common.sh@650 -- # local es=0 00:04:31.327 07:31:56 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55698 /var/tmp/spdk2.sock 00:04:31.327 07:31:56 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:31.327 07:31:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.327 07:31:56 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:31.327 07:31:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.327 07:31:56 -- common/autotest_common.sh@653 -- # waitforlisten 55698 /var/tmp/spdk2.sock 00:04:31.327 07:31:56 -- common/autotest_common.sh@829 -- # '[' -z 55698 ']' 00:04:31.327 07:31:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.327 07:31:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.327 07:31:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:31.327 07:31:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.327 07:31:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.327 [2024-12-02 07:31:56.664530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:31.327 [2024-12-02 07:31:56.664638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55698 ] 00:04:31.327 [2024-12-02 07:31:56.801851] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55680 has claimed it. 00:04:31.327 [2024-12-02 07:31:56.805337] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:31.895 ERROR: process (pid: 55698) is no longer running 00:04:31.895 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55698) - No such process 00:04:31.895 07:31:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.895 07:31:57 -- common/autotest_common.sh@862 -- # return 1 00:04:31.895 07:31:57 -- common/autotest_common.sh@653 -- # es=1 00:04:31.895 07:31:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.895 07:31:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.895 07:31:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.895 07:31:57 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:31.895 07:31:57 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:31.895 07:31:57 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:31.895 07:31:57 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:31.895 07:31:57 -- event/cpu_locks.sh@141 -- # killprocess 55680 00:04:31.895 07:31:57 -- common/autotest_common.sh@936 -- # '[' -z 55680 ']' 00:04:31.895 07:31:57 -- common/autotest_common.sh@940 -- # kill -0 55680 00:04:31.895 07:31:57 -- common/autotest_common.sh@941 -- # uname 00:04:31.895 07:31:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:31.895 07:31:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55680 00:04:31.895 07:31:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:31.895 07:31:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:31.895 killing process with pid 55680 00:04:31.895 07:31:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55680' 00:04:31.895 07:31:57 -- common/autotest_common.sh@955 -- # kill 55680 00:04:31.895 07:31:57 -- common/autotest_common.sh@960 -- # wait 55680 00:04:32.154 00:04:32.154 real 0m2.098s 00:04:32.154 user 0m6.103s 00:04:32.154 sys 0m0.303s 00:04:32.154 07:31:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:32.154 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:32.154 ************************************ 00:04:32.154 END TEST locking_overlapped_coremask 00:04:32.154 ************************************ 00:04:32.154 07:31:57 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:32.154 07:31:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:32.154 07:31:57 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:32.154 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:32.154 ************************************ 00:04:32.154 START TEST locking_overlapped_coremask_via_rpc 00:04:32.154 ************************************ 00:04:32.154 07:31:57 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:04:32.154 07:31:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55738 00:04:32.154 07:31:57 -- event/cpu_locks.sh@149 -- # waitforlisten 55738 /var/tmp/spdk.sock 00:04:32.154 07:31:57 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:32.154 07:31:57 -- common/autotest_common.sh@829 -- # '[' -z 55738 ']' 00:04:32.154 07:31:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.154 07:31:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.154 07:31:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.154 07:31:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.154 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:04:32.154 [2024-12-02 07:31:57.762272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:32.154 [2024-12-02 07:31:57.762416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55738 ] 00:04:32.413 [2024-12-02 07:31:57.899055] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:32.413 [2024-12-02 07:31:57.899106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:32.413 [2024-12-02 07:31:57.952400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:32.413 [2024-12-02 07:31:57.952714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.413 [2024-12-02 07:31:57.953412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.413 [2024-12-02 07:31:57.953420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.350 07:31:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.350 07:31:58 -- common/autotest_common.sh@862 -- # return 0 00:04:33.350 07:31:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55756 00:04:33.350 07:31:58 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:33.350 07:31:58 -- event/cpu_locks.sh@153 -- # waitforlisten 55756 /var/tmp/spdk2.sock 00:04:33.350 07:31:58 -- common/autotest_common.sh@829 -- # '[' -z 55756 ']' 00:04:33.350 07:31:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:33.350 07:31:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:33.350 07:31:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:33.350 07:31:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.350 07:31:58 -- common/autotest_common.sh@10 -- # set +x 00:04:33.350 [2024-12-02 07:31:58.739636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:33.350 [2024-12-02 07:31:58.739743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55756 ] 00:04:33.350 [2024-12-02 07:31:58.879680] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:33.350 [2024-12-02 07:31:58.883368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:33.609 [2024-12-02 07:31:58.987227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:33.609 [2024-12-02 07:31:58.991606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.609 [2024-12-02 07:31:58.991760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:33.609 [2024-12-02 07:31:58.991856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.177 07:31:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.177 07:31:59 -- common/autotest_common.sh@862 -- # return 0 00:04:34.177 07:31:59 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:34.177 07:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.177 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.177 07:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.177 07:31:59 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:34.177 07:31:59 -- common/autotest_common.sh@650 -- # local es=0 00:04:34.177 07:31:59 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:34.177 07:31:59 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:34.177 07:31:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.177 07:31:59 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:34.177 07:31:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.177 07:31:59 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:34.177 07:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.177 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.177 [2024-12-02 07:31:59.685497] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55738 has claimed it. 
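The claim_cpu_cores error above is the expected outcome of this test: the first target (pid 55738, mask 0x7, cores 0-2) has just taken its core locks via framework_enable_cpumask_locks, so when the same RPC is sent to the second target (pid 55756, mask 0x1c, cores 2-4) it cannot lock the shared core 2 and the call fails with the JSON-RPC error shown just below. A hand-run equivalent, again assuming scripts/rpc.py for the RPC calls:

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # cores 0-2, locks off at start
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, locks off at start
    sleep 2                                                                        # stand-in for waitforlisten on both sockets
    ./scripts/rpc.py framework_enable_cpumask_locks                                # first target claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails: core 2 already claimed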
00:04:34.177 request: 00:04:34.177 { 00:04:34.177 "method": "framework_enable_cpumask_locks", 00:04:34.177 "req_id": 1 00:04:34.177 } 00:04:34.177 Got JSON-RPC error response 00:04:34.177 response: 00:04:34.177 { 00:04:34.177 "code": -32603, 00:04:34.177 "message": "Failed to claim CPU core: 2" 00:04:34.177 } 00:04:34.177 07:31:59 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:34.177 07:31:59 -- common/autotest_common.sh@653 -- # es=1 00:04:34.177 07:31:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:34.177 07:31:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:34.177 07:31:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:34.177 07:31:59 -- event/cpu_locks.sh@158 -- # waitforlisten 55738 /var/tmp/spdk.sock 00:04:34.177 07:31:59 -- common/autotest_common.sh@829 -- # '[' -z 55738 ']' 00:04:34.177 07:31:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.177 07:31:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.177 07:31:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.178 07:31:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.178 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.437 07:31:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.437 07:31:59 -- common/autotest_common.sh@862 -- # return 0 00:04:34.437 07:31:59 -- event/cpu_locks.sh@159 -- # waitforlisten 55756 /var/tmp/spdk2.sock 00:04:34.437 07:31:59 -- common/autotest_common.sh@829 -- # '[' -z 55756 ']' 00:04:34.437 07:31:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.437 07:31:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.437 07:31:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:34.437 07:31:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.437 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 07:32:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.696 07:32:00 -- common/autotest_common.sh@862 -- # return 0 00:04:34.696 07:32:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:34.696 07:32:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:34.696 07:32:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:34.696 07:32:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:34.696 00:04:34.696 real 0m2.550s 00:04:34.696 user 0m1.304s 00:04:34.696 sys 0m0.176s 00:04:34.696 07:32:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.696 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:04:34.696 ************************************ 00:04:34.696 END TEST locking_overlapped_coremask_via_rpc 00:04:34.696 ************************************ 00:04:34.696 07:32:00 -- event/cpu_locks.sh@174 -- # cleanup 00:04:34.696 07:32:00 -- event/cpu_locks.sh@15 -- # [[ -z 55738 ]] 00:04:34.696 07:32:00 -- event/cpu_locks.sh@15 -- # killprocess 55738 00:04:34.696 07:32:00 -- common/autotest_common.sh@936 -- # '[' -z 55738 ']' 00:04:34.696 07:32:00 -- common/autotest_common.sh@940 -- # kill -0 55738 00:04:34.696 07:32:00 -- common/autotest_common.sh@941 -- # uname 00:04:34.696 07:32:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:34.696 07:32:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55738 00:04:34.696 07:32:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:34.696 07:32:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:34.696 killing process with pid 55738 00:04:34.696 07:32:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55738' 00:04:34.697 07:32:00 -- common/autotest_common.sh@955 -- # kill 55738 00:04:34.697 07:32:00 -- common/autotest_common.sh@960 -- # wait 55738 00:04:35.265 07:32:00 -- event/cpu_locks.sh@16 -- # [[ -z 55756 ]] 00:04:35.265 07:32:00 -- event/cpu_locks.sh@16 -- # killprocess 55756 00:04:35.265 07:32:00 -- common/autotest_common.sh@936 -- # '[' -z 55756 ']' 00:04:35.265 07:32:00 -- common/autotest_common.sh@940 -- # kill -0 55756 00:04:35.265 07:32:00 -- common/autotest_common.sh@941 -- # uname 00:04:35.265 07:32:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.265 07:32:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55756 00:04:35.265 07:32:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:35.265 07:32:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:35.265 killing process with pid 55756 00:04:35.265 07:32:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55756' 00:04:35.265 07:32:00 -- common/autotest_common.sh@955 -- # kill 55756 00:04:35.265 07:32:00 -- common/autotest_common.sh@960 -- # wait 55756 00:04:35.265 07:32:00 -- event/cpu_locks.sh@18 -- # rm -f 00:04:35.265 07:32:00 -- event/cpu_locks.sh@1 -- # cleanup 00:04:35.265 07:32:00 -- event/cpu_locks.sh@15 -- # [[ -z 55738 ]] 00:04:35.265 07:32:00 -- event/cpu_locks.sh@15 -- # killprocess 55738 00:04:35.265 07:32:00 -- 
common/autotest_common.sh@936 -- # '[' -z 55738 ']' 00:04:35.265 07:32:00 -- common/autotest_common.sh@940 -- # kill -0 55738 00:04:35.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55738) - No such process 00:04:35.265 Process with pid 55738 is not found 00:04:35.265 07:32:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55738 is not found' 00:04:35.265 07:32:00 -- event/cpu_locks.sh@16 -- # [[ -z 55756 ]] 00:04:35.265 07:32:00 -- event/cpu_locks.sh@16 -- # killprocess 55756 00:04:35.265 07:32:00 -- common/autotest_common.sh@936 -- # '[' -z 55756 ']' 00:04:35.265 07:32:00 -- common/autotest_common.sh@940 -- # kill -0 55756 00:04:35.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55756) - No such process 00:04:35.265 Process with pid 55756 is not found 00:04:35.265 07:32:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55756 is not found' 00:04:35.265 07:32:00 -- event/cpu_locks.sh@18 -- # rm -f 00:04:35.265 00:04:35.265 real 0m18.209s 00:04:35.265 user 0m33.735s 00:04:35.265 sys 0m3.897s 00:04:35.265 07:32:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.265 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:04:35.265 ************************************ 00:04:35.265 END TEST cpu_locks 00:04:35.265 ************************************ 00:04:35.525 ************************************ 00:04:35.525 END TEST event 00:04:35.525 ************************************ 00:04:35.525 00:04:35.525 real 0m44.515s 00:04:35.525 user 1m28.497s 00:04:35.525 sys 0m6.900s 00:04:35.525 07:32:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.525 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:04:35.525 07:32:00 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:35.525 07:32:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.525 07:32:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.525 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:04:35.525 ************************************ 00:04:35.525 START TEST thread 00:04:35.525 ************************************ 00:04:35.525 07:32:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:35.525 * Looking for test storage... 
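The thread suite that starts here consists of two poller_perf runs, recorded below: 1000 pollers for 1 second, first with a 1 microsecond poller period and then with a 0 microsecond (busy-poll) period. Judging from the banner each run prints, the flags map to -b pollers, -l period in microseconds and -t seconds, so a manual rerun would look roughly like:

    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same load with a 0 us (busy) period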
00:04:35.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:35.525 07:32:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.525 07:32:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.525 07:32:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.525 07:32:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.525 07:32:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.525 07:32:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.525 07:32:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.525 07:32:01 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.525 07:32:01 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.525 07:32:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.525 07:32:01 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.525 07:32:01 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.525 07:32:01 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.525 07:32:01 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.525 07:32:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.525 07:32:01 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.525 07:32:01 -- scripts/common.sh@344 -- # : 1 00:04:35.525 07:32:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.525 07:32:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.525 07:32:01 -- scripts/common.sh@364 -- # decimal 1 00:04:35.525 07:32:01 -- scripts/common.sh@352 -- # local d=1 00:04:35.525 07:32:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.525 07:32:01 -- scripts/common.sh@354 -- # echo 1 00:04:35.525 07:32:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.525 07:32:01 -- scripts/common.sh@365 -- # decimal 2 00:04:35.525 07:32:01 -- scripts/common.sh@352 -- # local d=2 00:04:35.525 07:32:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.525 07:32:01 -- scripts/common.sh@354 -- # echo 2 00:04:35.525 07:32:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.525 07:32:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.525 07:32:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.525 07:32:01 -- scripts/common.sh@367 -- # return 0 00:04:35.525 07:32:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.525 07:32:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.525 --rc genhtml_branch_coverage=1 00:04:35.525 --rc genhtml_function_coverage=1 00:04:35.525 --rc genhtml_legend=1 00:04:35.525 --rc geninfo_all_blocks=1 00:04:35.525 --rc geninfo_unexecuted_blocks=1 00:04:35.525 00:04:35.525 ' 00:04:35.525 07:32:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.525 --rc genhtml_branch_coverage=1 00:04:35.525 --rc genhtml_function_coverage=1 00:04:35.525 --rc genhtml_legend=1 00:04:35.525 --rc geninfo_all_blocks=1 00:04:35.525 --rc geninfo_unexecuted_blocks=1 00:04:35.525 00:04:35.525 ' 00:04:35.525 07:32:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.525 --rc genhtml_branch_coverage=1 00:04:35.525 --rc genhtml_function_coverage=1 00:04:35.525 --rc genhtml_legend=1 00:04:35.525 --rc geninfo_all_blocks=1 00:04:35.525 --rc geninfo_unexecuted_blocks=1 00:04:35.525 00:04:35.525 ' 00:04:35.525 07:32:01 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.525 --rc genhtml_branch_coverage=1 00:04:35.525 --rc genhtml_function_coverage=1 00:04:35.525 --rc genhtml_legend=1 00:04:35.525 --rc geninfo_all_blocks=1 00:04:35.525 --rc geninfo_unexecuted_blocks=1 00:04:35.525 00:04:35.525 ' 00:04:35.525 07:32:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:35.525 07:32:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:35.525 07:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.525 07:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:35.784 ************************************ 00:04:35.784 START TEST thread_poller_perf 00:04:35.784 ************************************ 00:04:35.784 07:32:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:35.784 [2024-12-02 07:32:01.172327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:35.784 [2024-12-02 07:32:01.172413] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55893 ] 00:04:35.784 [2024-12-02 07:32:01.310715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.784 [2024-12-02 07:32:01.378883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.784 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:37.162 [2024-12-02T07:32:02.786Z] ====================================== 00:04:37.162 [2024-12-02T07:32:02.786Z] busy:2210399014 (cyc) 00:04:37.162 [2024-12-02T07:32:02.786Z] total_run_count: 358000 00:04:37.162 [2024-12-02T07:32:02.786Z] tsc_hz: 2200000000 (cyc) 00:04:37.162 [2024-12-02T07:32:02.786Z] ====================================== 00:04:37.162 [2024-12-02T07:32:02.786Z] poller_cost: 6174 (cyc), 2806 (nsec) 00:04:37.162 00:04:37.162 real 0m1.307s 00:04:37.162 user 0m1.152s 00:04:37.162 sys 0m0.048s 00:04:37.162 07:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.162 07:32:02 -- common/autotest_common.sh@10 -- # set +x 00:04:37.162 ************************************ 00:04:37.162 END TEST thread_poller_perf 00:04:37.162 ************************************ 00:04:37.162 07:32:02 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:37.162 07:32:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:37.162 07:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.162 07:32:02 -- common/autotest_common.sh@10 -- # set +x 00:04:37.162 ************************************ 00:04:37.162 START TEST thread_poller_perf 00:04:37.162 ************************************ 00:04:37.162 07:32:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:37.162 [2024-12-02 07:32:02.527784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:37.162 [2024-12-02 07:32:02.527874] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55923 ] 00:04:37.162 [2024-12-02 07:32:02.660778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.162 [2024-12-02 07:32:02.710469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.162 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:38.536 [2024-12-02T07:32:04.160Z] ====================================== 00:04:38.536 [2024-12-02T07:32:04.160Z] busy:2202299056 (cyc) 00:04:38.536 [2024-12-02T07:32:04.160Z] total_run_count: 5188000 00:04:38.536 [2024-12-02T07:32:04.160Z] tsc_hz: 2200000000 (cyc) 00:04:38.536 [2024-12-02T07:32:04.160Z] ====================================== 00:04:38.536 [2024-12-02T07:32:04.160Z] poller_cost: 424 (cyc), 192 (nsec) 00:04:38.536 00:04:38.536 real 0m1.275s 00:04:38.536 user 0m1.128s 00:04:38.536 sys 0m0.041s 00:04:38.536 07:32:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.536 07:32:03 -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 ************************************ 00:04:38.536 END TEST thread_poller_perf 00:04:38.536 ************************************ 00:04:38.536 07:32:03 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:38.536 00:04:38.536 real 0m2.858s 00:04:38.536 user 0m2.432s 00:04:38.536 sys 0m0.215s 00:04:38.536 07:32:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.536 07:32:03 -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 ************************************ 00:04:38.536 END TEST thread 00:04:38.536 ************************************ 00:04:38.536 07:32:03 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:38.536 07:32:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.536 07:32:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.536 07:32:03 -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 ************************************ 00:04:38.536 START TEST accel 00:04:38.536 ************************************ 00:04:38.536 07:32:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:38.536 * Looking for test storage... 
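The poller_cost figures in the two runs above are simply busy cycles divided by total_run_count, with the nanosecond value obtained by dividing by the TSC rate (2200000000 cyc/s, i.e. 2.2 cycles per nanosecond). Plugging in the reported numbers reproduces both results:

    awk 'BEGIN { printf "%d cyc, %d nsec\n", 2210399014/358000,  2210399014/358000/2.2 }'   # run 1: 6174 cyc, 2806 nsec
    awk 'BEGIN { printf "%d cyc, %d nsec\n", 2202299056/5188000, 2202299056/5188000/2.2 }'  # run 2: 424 cyc, 192 nsec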
00:04:38.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:38.536 07:32:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.536 07:32:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:38.536 07:32:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.536 07:32:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:38.536 07:32:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:38.536 07:32:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:38.536 07:32:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:38.536 07:32:04 -- scripts/common.sh@335 -- # IFS=.-: 00:04:38.536 07:32:04 -- scripts/common.sh@335 -- # read -ra ver1 00:04:38.536 07:32:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.536 07:32:04 -- scripts/common.sh@336 -- # read -ra ver2 00:04:38.536 07:32:04 -- scripts/common.sh@337 -- # local 'op=<' 00:04:38.536 07:32:04 -- scripts/common.sh@339 -- # ver1_l=2 00:04:38.536 07:32:04 -- scripts/common.sh@340 -- # ver2_l=1 00:04:38.536 07:32:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:38.536 07:32:04 -- scripts/common.sh@343 -- # case "$op" in 00:04:38.536 07:32:04 -- scripts/common.sh@344 -- # : 1 00:04:38.536 07:32:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:38.536 07:32:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.536 07:32:04 -- scripts/common.sh@364 -- # decimal 1 00:04:38.536 07:32:04 -- scripts/common.sh@352 -- # local d=1 00:04:38.536 07:32:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.536 07:32:04 -- scripts/common.sh@354 -- # echo 1 00:04:38.536 07:32:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:38.536 07:32:04 -- scripts/common.sh@365 -- # decimal 2 00:04:38.536 07:32:04 -- scripts/common.sh@352 -- # local d=2 00:04:38.536 07:32:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.536 07:32:04 -- scripts/common.sh@354 -- # echo 2 00:04:38.536 07:32:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:38.536 07:32:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:38.536 07:32:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:38.536 07:32:04 -- scripts/common.sh@367 -- # return 0 00:04:38.536 07:32:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.536 07:32:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:38.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.536 --rc genhtml_branch_coverage=1 00:04:38.536 --rc genhtml_function_coverage=1 00:04:38.536 --rc genhtml_legend=1 00:04:38.536 --rc geninfo_all_blocks=1 00:04:38.536 --rc geninfo_unexecuted_blocks=1 00:04:38.536 00:04:38.536 ' 00:04:38.536 07:32:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:38.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.536 --rc genhtml_branch_coverage=1 00:04:38.536 --rc genhtml_function_coverage=1 00:04:38.536 --rc genhtml_legend=1 00:04:38.536 --rc geninfo_all_blocks=1 00:04:38.536 --rc geninfo_unexecuted_blocks=1 00:04:38.536 00:04:38.536 ' 00:04:38.536 07:32:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:38.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.536 --rc genhtml_branch_coverage=1 00:04:38.536 --rc genhtml_function_coverage=1 00:04:38.536 --rc genhtml_legend=1 00:04:38.536 --rc geninfo_all_blocks=1 00:04:38.536 --rc geninfo_unexecuted_blocks=1 00:04:38.536 00:04:38.536 ' 00:04:38.536 07:32:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:38.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.536 --rc genhtml_branch_coverage=1 00:04:38.536 --rc genhtml_function_coverage=1 00:04:38.536 --rc genhtml_legend=1 00:04:38.536 --rc geninfo_all_blocks=1 00:04:38.536 --rc geninfo_unexecuted_blocks=1 00:04:38.536 00:04:38.536 ' 00:04:38.536 07:32:04 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:04:38.536 07:32:04 -- accel/accel.sh@74 -- # get_expected_opcs 00:04:38.536 07:32:04 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.536 07:32:04 -- accel/accel.sh@59 -- # spdk_tgt_pid=56010 00:04:38.536 07:32:04 -- accel/accel.sh@60 -- # waitforlisten 56010 00:04:38.536 07:32:04 -- common/autotest_common.sh@829 -- # '[' -z 56010 ']' 00:04:38.536 07:32:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.536 07:32:04 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:38.536 07:32:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.536 07:32:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.536 07:32:04 -- accel/accel.sh@58 -- # build_accel_config 00:04:38.536 07:32:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.536 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:04:38.536 07:32:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:38.536 07:32:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:38.536 07:32:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:38.536 07:32:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:38.536 07:32:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:38.536 07:32:04 -- accel/accel.sh@41 -- # local IFS=, 00:04:38.536 07:32:04 -- accel/accel.sh@42 -- # jq -r . 00:04:38.536 [2024-12-02 07:32:04.116944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:38.536 [2024-12-02 07:32:04.117065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56010 ] 00:04:38.795 [2024-12-02 07:32:04.251997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.795 [2024-12-02 07:32:04.300221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.795 [2024-12-02 07:32:04.300425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.727 07:32:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.727 07:32:05 -- common/autotest_common.sh@862 -- # return 0 00:04:39.727 07:32:05 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:39.727 07:32:05 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:04:39.727 07:32:05 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:39.727 07:32:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.727 07:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.727 07:32:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 
07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # IFS== 00:04:39.727 07:32:05 -- accel/accel.sh@64 -- # read -r opc module 00:04:39.727 07:32:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:39.727 07:32:05 -- accel/accel.sh@67 -- # killprocess 56010 00:04:39.727 07:32:05 -- common/autotest_common.sh@936 -- # '[' -z 56010 ']' 00:04:39.727 07:32:05 -- common/autotest_common.sh@940 -- # kill -0 56010 00:04:39.727 07:32:05 -- common/autotest_common.sh@941 -- # uname 00:04:39.727 07:32:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:39.727 07:32:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56010 00:04:39.727 killing process with pid 56010 00:04:39.727 07:32:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:39.727 07:32:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:39.728 07:32:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56010' 00:04:39.728 07:32:05 -- common/autotest_common.sh@955 -- # kill 56010 00:04:39.728 07:32:05 -- common/autotest_common.sh@960 -- # wait 56010 00:04:39.985 07:32:05 -- accel/accel.sh@68 -- # trap - ERR 00:04:39.985 07:32:05 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:04:39.985 07:32:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:39.985 07:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.985 07:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.985 07:32:05 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:04:39.985 07:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:39.985 07:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:39.985 07:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:39.985 07:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:39.985 07:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:39.985 07:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:39.985 07:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:39.985 07:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:39.985 07:32:05 -- accel/accel.sh@42 -- # jq -r . 
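The loop traced above is the harness building its expected-opcode map: every "opcode=module" pair returned by the accel_get_opc_assignments RPC is split on '=' and stored in the expected_opcs associative array. A minimal sketch of that pattern, assuming $rpc_py points at the scripts/rpc.py of a running SPDK target (all names taken from the trace, not a verbatim copy of accel.sh):

  exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  declare -A expected_opcs
  for opc_opt in "${exp_opcs[@]}"; do
      IFS='=' read -r opc module <<< "$opc_opt"   # split "opcode=module"
      expected_opcs["$opc"]=$module               # e.g. expected_opcs[copy]=software
  done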
00:04:39.985 07:32:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.985 07:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.985 07:32:05 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:39.985 07:32:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:39.985 07:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.985 07:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.985 ************************************ 00:04:39.985 START TEST accel_missing_filename 00:04:39.985 ************************************ 00:04:39.985 07:32:05 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:04:39.985 07:32:05 -- common/autotest_common.sh@650 -- # local es=0 00:04:39.985 07:32:05 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:39.985 07:32:05 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:39.985 07:32:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.985 07:32:05 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:39.985 07:32:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.985 07:32:05 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:04:39.985 07:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:39.985 07:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:39.985 07:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:39.985 07:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:39.985 07:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:39.985 07:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:39.985 07:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:39.985 07:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:39.985 07:32:05 -- accel/accel.sh@42 -- # jq -r . 00:04:39.985 [2024-12-02 07:32:05.554156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:39.985 [2024-12-02 07:32:05.554242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56056 ] 00:04:40.243 [2024-12-02 07:32:05.689861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.243 [2024-12-02 07:32:05.736889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.243 [2024-12-02 07:32:05.762901] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.243 [2024-12-02 07:32:05.798190] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:40.501 A filename is required. 
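"A filename is required." is the expected failure here: the compress workload was launched without an input file. Per the accel_perf usage text later in this log, compress/decompress take their input via -l; a minimal corrected invocation along the lines of the accel_compress_verify test that follows (same binary and test file as in this log, shown only as an illustration) would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib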
00:04:40.501 07:32:05 -- common/autotest_common.sh@653 -- # es=234 00:04:40.501 07:32:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.501 07:32:05 -- common/autotest_common.sh@662 -- # es=106 00:04:40.501 07:32:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:04:40.501 07:32:05 -- common/autotest_common.sh@670 -- # es=1 00:04:40.501 07:32:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.501 00:04:40.501 real 0m0.351s 00:04:40.501 user 0m0.223s 00:04:40.501 sys 0m0.071s 00:04:40.501 ************************************ 00:04:40.501 END TEST accel_missing_filename 00:04:40.501 ************************************ 00:04:40.501 07:32:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.501 07:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:40.501 07:32:05 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:40.501 07:32:05 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:40.501 07:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.501 07:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:40.501 ************************************ 00:04:40.501 START TEST accel_compress_verify 00:04:40.501 ************************************ 00:04:40.501 07:32:05 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:40.501 07:32:05 -- common/autotest_common.sh@650 -- # local es=0 00:04:40.501 07:32:05 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:40.501 07:32:05 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:40.501 07:32:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.501 07:32:05 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:40.501 07:32:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.501 07:32:05 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:40.501 07:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:40.501 07:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:40.501 07:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:40.501 07:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:40.501 07:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:40.501 07:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:40.501 07:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:40.501 07:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:40.501 07:32:05 -- accel/accel.sh@42 -- # jq -r . 00:04:40.501 [2024-12-02 07:32:05.954313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:40.501 [2024-12-02 07:32:05.954407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56075 ] 00:04:40.501 [2024-12-02 07:32:06.087664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.760 [2024-12-02 07:32:06.141876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.760 [2024-12-02 07:32:06.169800] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.760 [2024-12-02 07:32:06.205418] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:40.760 00:04:40.760 Compression does not support the verify option, aborting. 00:04:40.760 07:32:06 -- common/autotest_common.sh@653 -- # es=161 00:04:40.760 07:32:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.760 07:32:06 -- common/autotest_common.sh@662 -- # es=33 00:04:40.760 07:32:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:04:40.760 07:32:06 -- common/autotest_common.sh@670 -- # es=1 00:04:40.760 07:32:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.760 00:04:40.760 real 0m0.354s 00:04:40.760 user 0m0.230s 00:04:40.760 sys 0m0.075s 00:04:40.760 ************************************ 00:04:40.760 END TEST accel_compress_verify 00:04:40.760 ************************************ 00:04:40.760 07:32:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.760 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.760 07:32:06 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:40.760 07:32:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:40.760 07:32:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.760 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.760 ************************************ 00:04:40.760 START TEST accel_wrong_workload 00:04:40.760 ************************************ 00:04:40.760 07:32:06 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:04:40.760 07:32:06 -- common/autotest_common.sh@650 -- # local es=0 00:04:40.760 07:32:06 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:40.760 07:32:06 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:40.760 07:32:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.760 07:32:06 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:40.760 07:32:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.760 07:32:06 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:04:40.760 07:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:40.760 07:32:06 -- accel/accel.sh@12 -- # build_accel_config 00:04:40.760 07:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:40.760 07:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:40.760 07:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:40.760 07:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:40.760 07:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:40.760 07:32:06 -- accel/accel.sh@41 -- # local IFS=, 00:04:40.760 07:32:06 -- accel/accel.sh@42 -- # jq -r . 
00:04:40.760 Unsupported workload type: foobar 00:04:40.760 [2024-12-02 07:32:06.356726] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:40.760 accel_perf options: 00:04:40.760 [-h help message] 00:04:40.760 [-q queue depth per core] 00:04:40.760 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:40.760 [-T number of threads per core 00:04:40.760 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:40.760 [-t time in seconds] 00:04:40.760 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:40.760 [ dif_verify, , dif_generate, dif_generate_copy 00:04:40.760 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:40.760 [-l for compress/decompress workloads, name of uncompressed input file 00:04:40.760 [-S for crc32c workload, use this seed value (default 0) 00:04:40.760 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:40.760 [-f for fill workload, use this BYTE value (default 255) 00:04:40.760 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:40.760 [-y verify result if this switch is on] 00:04:40.760 [-a tasks to allocate per core (default: same value as -q)] 00:04:40.760 Can be used to spread operations across a wider range of memory. 00:04:40.760 07:32:06 -- common/autotest_common.sh@653 -- # es=1 00:04:40.760 07:32:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.760 07:32:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.760 ************************************ 00:04:40.760 END TEST accel_wrong_workload 00:04:40.760 ************************************ 00:04:40.760 07:32:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.761 00:04:40.761 real 0m0.029s 00:04:40.761 user 0m0.017s 00:04:40.761 sys 0m0.012s 00:04:40.761 07:32:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.761 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:04:41.020 07:32:06 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:41.020 07:32:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:41.020 07:32:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.020 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:04:41.020 ************************************ 00:04:41.020 START TEST accel_negative_buffers 00:04:41.020 ************************************ 00:04:41.020 07:32:06 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:41.020 07:32:06 -- common/autotest_common.sh@650 -- # local es=0 00:04:41.020 07:32:06 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:41.020 07:32:06 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:41.020 07:32:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.020 07:32:06 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:41.020 07:32:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.020 07:32:06 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:04:41.020 07:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:41.020 07:32:06 -- accel/accel.sh@12 -- # 
build_accel_config 00:04:41.020 07:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:41.020 07:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:41.020 07:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:41.020 07:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:41.020 07:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:41.020 07:32:06 -- accel/accel.sh@41 -- # local IFS=, 00:04:41.020 07:32:06 -- accel/accel.sh@42 -- # jq -r . 00:04:41.020 -x option must be non-negative. 00:04:41.020 [2024-12-02 07:32:06.433946] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:41.020 accel_perf options: 00:04:41.020 [-h help message] 00:04:41.020 [-q queue depth per core] 00:04:41.020 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:41.020 [-T number of threads per core 00:04:41.020 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:41.020 [-t time in seconds] 00:04:41.020 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:41.020 [ dif_verify, , dif_generate, dif_generate_copy 00:04:41.020 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:41.020 [-l for compress/decompress workloads, name of uncompressed input file 00:04:41.020 [-S for crc32c workload, use this seed value (default 0) 00:04:41.020 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:41.020 [-f for fill workload, use this BYTE value (default 255) 00:04:41.020 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:41.020 [-y verify result if this switch is on] 00:04:41.020 [-a tasks to allocate per core (default: same value as -q)] 00:04:41.020 Can be used to spread operations across a wider range of memory. 
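The usage listing above spells out the accel_perf knobs these tests exercise: -w picks the workload, -q/-a size the queue and task pools, -S seeds crc32c, -C sets the vector count, and -x (the flag rejected above with -1) must name at least two source buffers for xor. A minimal valid xor run consistent with that listing, using the same binary path as the trace (illustrative only, output not reproduced here), would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2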
00:04:41.020 ************************************ 00:04:41.020 END TEST accel_negative_buffers 00:04:41.020 ************************************ 00:04:41.020 07:32:06 -- common/autotest_common.sh@653 -- # es=1 00:04:41.020 07:32:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.020 07:32:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.020 07:32:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.020 00:04:41.020 real 0m0.032s 00:04:41.020 user 0m0.019s 00:04:41.020 sys 0m0.012s 00:04:41.020 07:32:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.020 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:04:41.020 07:32:06 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:41.020 07:32:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:41.020 07:32:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.020 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:04:41.020 ************************************ 00:04:41.020 START TEST accel_crc32c 00:04:41.020 ************************************ 00:04:41.020 07:32:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:41.020 07:32:06 -- accel/accel.sh@16 -- # local accel_opc 00:04:41.020 07:32:06 -- accel/accel.sh@17 -- # local accel_module 00:04:41.020 07:32:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:41.020 07:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:41.020 07:32:06 -- accel/accel.sh@12 -- # build_accel_config 00:04:41.020 07:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:41.020 07:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:41.020 07:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:41.020 07:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:41.020 07:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:41.020 07:32:06 -- accel/accel.sh@41 -- # local IFS=, 00:04:41.020 07:32:06 -- accel/accel.sh@42 -- # jq -r . 00:04:41.020 [2024-12-02 07:32:06.513365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:41.020 [2024-12-02 07:32:06.513449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56139 ] 00:04:41.279 [2024-12-02 07:32:06.649920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.279 [2024-12-02 07:32:06.702795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.656 07:32:07 -- accel/accel.sh@18 -- # out=' 00:04:42.657 SPDK Configuration: 00:04:42.657 Core mask: 0x1 00:04:42.657 00:04:42.657 Accel Perf Configuration: 00:04:42.657 Workload Type: crc32c 00:04:42.657 CRC-32C seed: 32 00:04:42.657 Transfer size: 4096 bytes 00:04:42.657 Vector count 1 00:04:42.657 Module: software 00:04:42.657 Queue depth: 32 00:04:42.657 Allocate depth: 32 00:04:42.657 # threads/core: 1 00:04:42.657 Run time: 1 seconds 00:04:42.657 Verify: Yes 00:04:42.657 00:04:42.657 Running for 1 seconds... 
00:04:42.657 00:04:42.657 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:42.657 ------------------------------------------------------------------------------------ 00:04:42.657 0,0 557184/s 2176 MiB/s 0 0 00:04:42.657 ==================================================================================== 00:04:42.657 Total 557184/s 2176 MiB/s 0 0' 00:04:42.657 07:32:07 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:07 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:42.657 07:32:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:42.657 07:32:07 -- accel/accel.sh@12 -- # build_accel_config 00:04:42.657 07:32:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:42.657 07:32:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.657 07:32:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.657 07:32:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:42.657 07:32:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:42.657 07:32:07 -- accel/accel.sh@41 -- # local IFS=, 00:04:42.657 07:32:07 -- accel/accel.sh@42 -- # jq -r . 00:04:42.657 [2024-12-02 07:32:07.862345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:42.657 [2024-12-02 07:32:07.862575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56153 ] 00:04:42.657 [2024-12-02 07:32:07.988560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.657 [2024-12-02 07:32:08.033985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=0x1 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=crc32c 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=32 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=software 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@23 -- # accel_module=software 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=32 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=32 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=1 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val=Yes 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:42.657 07:32:08 -- accel/accel.sh@21 -- # val= 00:04:42.657 07:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # IFS=: 00:04:42.657 07:32:08 -- accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@21 -- # val= 00:04:43.594 07:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # IFS=: 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@21 -- # val= 00:04:43.594 07:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # IFS=: 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@21 -- # val= 00:04:43.594 07:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # IFS=: 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@21 -- # val= 00:04:43.594 07:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # IFS=: 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@21 -- # val= 00:04:43.594 07:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # IFS=: 00:04:43.594 07:32:09 -- 
accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@21 -- # val= 00:04:43.594 07:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # IFS=: 00:04:43.594 07:32:09 -- accel/accel.sh@20 -- # read -r var val 00:04:43.594 07:32:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:43.594 07:32:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:43.594 07:32:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:43.594 ************************************ 00:04:43.594 END TEST accel_crc32c 00:04:43.594 ************************************ 00:04:43.594 00:04:43.595 real 0m2.690s 00:04:43.595 user 0m2.349s 00:04:43.595 sys 0m0.144s 00:04:43.595 07:32:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.595 07:32:09 -- common/autotest_common.sh@10 -- # set +x 00:04:43.595 07:32:09 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:43.595 07:32:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:43.595 07:32:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.595 07:32:09 -- common/autotest_common.sh@10 -- # set +x 00:04:43.854 ************************************ 00:04:43.855 START TEST accel_crc32c_C2 00:04:43.855 ************************************ 00:04:43.855 07:32:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:43.855 07:32:09 -- accel/accel.sh@16 -- # local accel_opc 00:04:43.855 07:32:09 -- accel/accel.sh@17 -- # local accel_module 00:04:43.855 07:32:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:43.855 07:32:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:43.855 07:32:09 -- accel/accel.sh@12 -- # build_accel_config 00:04:43.855 07:32:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:43.855 07:32:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:43.855 07:32:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:43.855 07:32:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:43.855 07:32:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:43.855 07:32:09 -- accel/accel.sh@41 -- # local IFS=, 00:04:43.855 07:32:09 -- accel/accel.sh@42 -- # jq -r . 00:04:43.855 [2024-12-02 07:32:09.244784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:43.855 [2024-12-02 07:32:09.244864] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56193 ] 00:04:43.855 [2024-12-02 07:32:09.372528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.855 [2024-12-02 07:32:09.417329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.231 07:32:10 -- accel/accel.sh@18 -- # out=' 00:04:45.231 SPDK Configuration: 00:04:45.231 Core mask: 0x1 00:04:45.231 00:04:45.231 Accel Perf Configuration: 00:04:45.231 Workload Type: crc32c 00:04:45.231 CRC-32C seed: 0 00:04:45.231 Transfer size: 4096 bytes 00:04:45.231 Vector count 2 00:04:45.231 Module: software 00:04:45.231 Queue depth: 32 00:04:45.231 Allocate depth: 32 00:04:45.231 # threads/core: 1 00:04:45.231 Run time: 1 seconds 00:04:45.231 Verify: Yes 00:04:45.231 00:04:45.231 Running for 1 seconds... 
00:04:45.231 00:04:45.231 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:45.231 ------------------------------------------------------------------------------------ 00:04:45.231 0,0 432000/s 3375 MiB/s 0 0 00:04:45.231 ==================================================================================== 00:04:45.231 Total 432000/s 1687 MiB/s 0 0' 00:04:45.231 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.231 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.231 07:32:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:45.231 07:32:10 -- accel/accel.sh@12 -- # build_accel_config 00:04:45.231 07:32:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:45.231 07:32:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:45.231 07:32:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.231 07:32:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.231 07:32:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:45.231 07:32:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:45.231 07:32:10 -- accel/accel.sh@41 -- # local IFS=, 00:04:45.231 07:32:10 -- accel/accel.sh@42 -- # jq -r . 00:04:45.231 [2024-12-02 07:32:10.581345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:45.231 [2024-12-02 07:32:10.581438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56207 ] 00:04:45.231 [2024-12-02 07:32:10.716738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.231 [2024-12-02 07:32:10.762468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.231 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.231 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.231 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.231 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.231 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=0x1 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=crc32c 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=0 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=software 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@23 -- # accel_module=software 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=32 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=32 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=1 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val=Yes 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:45.232 07:32:10 -- accel/accel.sh@21 -- # val= 00:04:45.232 07:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # IFS=: 00:04:45.232 07:32:10 -- accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@21 -- # val= 00:04:46.610 07:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # IFS=: 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@21 -- # val= 00:04:46.610 07:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # IFS=: 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@21 -- # val= 00:04:46.610 07:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # IFS=: 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@21 -- # val= 00:04:46.610 07:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # IFS=: 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@21 -- # val= 00:04:46.610 07:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # IFS=: 00:04:46.610 07:32:11 -- 
accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@21 -- # val= 00:04:46.610 07:32:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # IFS=: 00:04:46.610 07:32:11 -- accel/accel.sh@20 -- # read -r var val 00:04:46.610 07:32:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:46.610 07:32:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:46.610 07:32:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:46.610 00:04:46.610 real 0m2.682s 00:04:46.610 user 0m2.363s 00:04:46.610 sys 0m0.122s 00:04:46.610 07:32:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.610 07:32:11 -- common/autotest_common.sh@10 -- # set +x 00:04:46.610 ************************************ 00:04:46.610 END TEST accel_crc32c_C2 00:04:46.610 ************************************ 00:04:46.610 07:32:11 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:46.610 07:32:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:46.610 07:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.610 07:32:11 -- common/autotest_common.sh@10 -- # set +x 00:04:46.610 ************************************ 00:04:46.610 START TEST accel_copy 00:04:46.610 ************************************ 00:04:46.610 07:32:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:04:46.610 07:32:11 -- accel/accel.sh@16 -- # local accel_opc 00:04:46.610 07:32:11 -- accel/accel.sh@17 -- # local accel_module 00:04:46.610 07:32:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:04:46.610 07:32:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:46.610 07:32:11 -- accel/accel.sh@12 -- # build_accel_config 00:04:46.610 07:32:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:46.610 07:32:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.610 07:32:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.610 07:32:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:46.610 07:32:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:46.610 07:32:11 -- accel/accel.sh@41 -- # local IFS=, 00:04:46.610 07:32:11 -- accel/accel.sh@42 -- # jq -r . 00:04:46.610 [2024-12-02 07:32:11.984322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.610 [2024-12-02 07:32:11.984566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56236 ] 00:04:46.610 [2024-12-02 07:32:12.119665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.610 [2024-12-02 07:32:12.170026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.987 07:32:13 -- accel/accel.sh@18 -- # out=' 00:04:47.987 SPDK Configuration: 00:04:47.987 Core mask: 0x1 00:04:47.987 00:04:47.987 Accel Perf Configuration: 00:04:47.987 Workload Type: copy 00:04:47.987 Transfer size: 4096 bytes 00:04:47.987 Vector count 1 00:04:47.987 Module: software 00:04:47.987 Queue depth: 32 00:04:47.987 Allocate depth: 32 00:04:47.987 # threads/core: 1 00:04:47.987 Run time: 1 seconds 00:04:47.987 Verify: Yes 00:04:47.987 00:04:47.987 Running for 1 seconds... 
00:04:47.987 00:04:47.987 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:47.987 ------------------------------------------------------------------------------------ 00:04:47.987 0,0 389632/s 1522 MiB/s 0 0 00:04:47.987 ==================================================================================== 00:04:47.987 Total 389632/s 1522 MiB/s 0 0' 00:04:47.987 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.987 07:32:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:47.987 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.987 07:32:13 -- accel/accel.sh@12 -- # build_accel_config 00:04:47.987 07:32:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:47.987 07:32:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:47.987 07:32:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.987 07:32:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.987 07:32:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:47.987 07:32:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:47.987 07:32:13 -- accel/accel.sh@41 -- # local IFS=, 00:04:47.987 07:32:13 -- accel/accel.sh@42 -- # jq -r . 00:04:47.987 [2024-12-02 07:32:13.340071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:47.987 [2024-12-02 07:32:13.340164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56256 ] 00:04:47.988 [2024-12-02 07:32:13.474248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.988 [2024-12-02 07:32:13.521063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=0x1 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=copy 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@24 -- # accel_opc=copy 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- 
accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=software 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@23 -- # accel_module=software 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=32 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=32 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=1 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val=Yes 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:47.988 07:32:13 -- accel/accel.sh@21 -- # val= 00:04:47.988 07:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # IFS=: 00:04:47.988 07:32:13 -- accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@21 -- # val= 00:04:49.367 07:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # IFS=: 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@21 -- # val= 00:04:49.367 07:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # IFS=: 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@21 -- # val= 00:04:49.367 07:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # IFS=: 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@21 -- # val= 00:04:49.367 07:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # IFS=: 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@21 -- # val= 00:04:49.367 07:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # IFS=: 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@21 -- # val= 00:04:49.367 07:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:49.367 07:32:14 -- accel/accel.sh@20 -- # IFS=: 00:04:49.367 07:32:14 -- 
accel/accel.sh@20 -- # read -r var val 00:04:49.367 07:32:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:49.367 07:32:14 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:04:49.367 07:32:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:49.367 00:04:49.367 real 0m2.707s 00:04:49.367 user 0m2.367s 00:04:49.367 sys 0m0.141s 00:04:49.367 07:32:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.367 07:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.367 ************************************ 00:04:49.367 END TEST accel_copy 00:04:49.367 ************************************ 00:04:49.367 07:32:14 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:49.367 07:32:14 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:04:49.367 07:32:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.367 07:32:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.367 ************************************ 00:04:49.367 START TEST accel_fill 00:04:49.367 ************************************ 00:04:49.367 07:32:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:49.367 07:32:14 -- accel/accel.sh@16 -- # local accel_opc 00:04:49.367 07:32:14 -- accel/accel.sh@17 -- # local accel_module 00:04:49.367 07:32:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:49.367 07:32:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:49.367 07:32:14 -- accel/accel.sh@12 -- # build_accel_config 00:04:49.367 07:32:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:49.367 07:32:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.367 07:32:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.367 07:32:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:49.367 07:32:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:49.367 07:32:14 -- accel/accel.sh@41 -- # local IFS=, 00:04:49.367 07:32:14 -- accel/accel.sh@42 -- # jq -r . 00:04:49.367 [2024-12-02 07:32:14.740836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:49.367 [2024-12-02 07:32:14.741072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56290 ] 00:04:49.367 [2024-12-02 07:32:14.875731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.367 [2024-12-02 07:32:14.925766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.742 07:32:16 -- accel/accel.sh@18 -- # out=' 00:04:50.742 SPDK Configuration: 00:04:50.742 Core mask: 0x1 00:04:50.742 00:04:50.742 Accel Perf Configuration: 00:04:50.742 Workload Type: fill 00:04:50.742 Fill pattern: 0x80 00:04:50.742 Transfer size: 4096 bytes 00:04:50.742 Vector count 1 00:04:50.742 Module: software 00:04:50.742 Queue depth: 64 00:04:50.742 Allocate depth: 64 00:04:50.742 # threads/core: 1 00:04:50.742 Run time: 1 seconds 00:04:50.742 Verify: Yes 00:04:50.742 00:04:50.742 Running for 1 seconds... 
00:04:50.742 00:04:50.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:50.742 ------------------------------------------------------------------------------------ 00:04:50.742 0,0 568576/s 2221 MiB/s 0 0 00:04:50.742 ==================================================================================== 00:04:50.742 Total 568576/s 2221 MiB/s 0 0' 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:50.742 07:32:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:50.742 07:32:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:50.742 07:32:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:50.742 07:32:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.742 07:32:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.742 07:32:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:50.742 07:32:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:50.742 07:32:16 -- accel/accel.sh@41 -- # local IFS=, 00:04:50.742 07:32:16 -- accel/accel.sh@42 -- # jq -r . 00:04:50.742 [2024-12-02 07:32:16.092580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:50.742 [2024-12-02 07:32:16.092671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56304 ] 00:04:50.742 [2024-12-02 07:32:16.219799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.742 [2024-12-02 07:32:16.267526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=0x1 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=fill 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@24 -- # accel_opc=fill 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=0x80 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 
00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=software 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@23 -- # accel_module=software 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=64 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=64 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=1 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val=Yes 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:50.742 07:32:16 -- accel/accel.sh@21 -- # val= 00:04:50.742 07:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # IFS=: 00:04:50.742 07:32:16 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@21 -- # val= 00:04:52.119 07:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # IFS=: 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@21 -- # val= 00:04:52.119 07:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # IFS=: 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@21 -- # val= 00:04:52.119 07:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # IFS=: 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@21 -- # val= 00:04:52.119 07:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # IFS=: 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@21 -- # val= 00:04:52.119 07:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # IFS=: 
00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@21 -- # val= 00:04:52.119 07:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # IFS=: 00:04:52.119 ************************************ 00:04:52.119 END TEST accel_fill 00:04:52.119 ************************************ 00:04:52.119 07:32:17 -- accel/accel.sh@20 -- # read -r var val 00:04:52.119 07:32:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:52.119 07:32:17 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:04:52.119 07:32:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.119 00:04:52.119 real 0m2.704s 00:04:52.119 user 0m2.360s 00:04:52.119 sys 0m0.147s 00:04:52.119 07:32:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.119 07:32:17 -- common/autotest_common.sh@10 -- # set +x 00:04:52.119 07:32:17 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:52.119 07:32:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:52.119 07:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.119 07:32:17 -- common/autotest_common.sh@10 -- # set +x 00:04:52.119 ************************************ 00:04:52.119 START TEST accel_copy_crc32c 00:04:52.119 ************************************ 00:04:52.119 07:32:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:04:52.119 07:32:17 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.119 07:32:17 -- accel/accel.sh@17 -- # local accel_module 00:04:52.119 07:32:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:52.119 07:32:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:52.119 07:32:17 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.119 07:32:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:52.119 07:32:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.119 07:32:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.119 07:32:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:52.119 07:32:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:52.119 07:32:17 -- accel/accel.sh@41 -- # local IFS=, 00:04:52.119 07:32:17 -- accel/accel.sh@42 -- # jq -r . 00:04:52.119 [2024-12-02 07:32:17.497676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:52.119 [2024-12-02 07:32:17.497942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56339 ] 00:04:52.119 [2024-12-02 07:32:17.633963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.119 [2024-12-02 07:32:17.681381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.577 07:32:18 -- accel/accel.sh@18 -- # out=' 00:04:53.577 SPDK Configuration: 00:04:53.577 Core mask: 0x1 00:04:53.577 00:04:53.577 Accel Perf Configuration: 00:04:53.577 Workload Type: copy_crc32c 00:04:53.577 CRC-32C seed: 0 00:04:53.577 Vector size: 4096 bytes 00:04:53.577 Transfer size: 4096 bytes 00:04:53.577 Vector count 1 00:04:53.577 Module: software 00:04:53.577 Queue depth: 32 00:04:53.577 Allocate depth: 32 00:04:53.577 # threads/core: 1 00:04:53.577 Run time: 1 seconds 00:04:53.577 Verify: Yes 00:04:53.577 00:04:53.577 Running for 1 seconds... 
00:04:53.577 00:04:53.577 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:53.577 ------------------------------------------------------------------------------------ 00:04:53.577 0,0 304320/s 1188 MiB/s 0 0 00:04:53.577 ==================================================================================== 00:04:53.577 Total 304320/s 1188 MiB/s 0 0' 00:04:53.577 07:32:18 -- accel/accel.sh@20 -- # IFS=: 00:04:53.577 07:32:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:53.577 07:32:18 -- accel/accel.sh@20 -- # read -r var val 00:04:53.577 07:32:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:04:53.577 07:32:18 -- accel/accel.sh@12 -- # build_accel_config 00:04:53.577 07:32:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:53.577 07:32:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.577 07:32:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.577 07:32:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:53.577 07:32:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:53.577 07:32:18 -- accel/accel.sh@41 -- # local IFS=, 00:04:53.577 07:32:18 -- accel/accel.sh@42 -- # jq -r . 00:04:53.577 [2024-12-02 07:32:18.842176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:53.577 [2024-12-02 07:32:18.842435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56358 ] 00:04:53.577 [2024-12-02 07:32:18.967473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.577 [2024-12-02 07:32:19.013047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.577 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.577 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.577 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.577 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.577 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.577 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.577 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.577 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.577 07:32:19 -- accel/accel.sh@21 -- # val=0x1 00:04:53.577 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.577 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.577 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.577 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.577 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=0 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 
07:32:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=software 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@23 -- # accel_module=software 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=32 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=32 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=1 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val=Yes 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:53.578 07:32:19 -- accel/accel.sh@21 -- # val= 00:04:53.578 07:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # IFS=: 00:04:53.578 07:32:19 -- accel/accel.sh@20 -- # read -r var val 00:04:54.557 07:32:20 -- accel/accel.sh@21 -- # val= 00:04:54.557 07:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # IFS=: 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # read -r var val 00:04:54.557 07:32:20 -- accel/accel.sh@21 -- # val= 00:04:54.557 07:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # IFS=: 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # read -r var val 00:04:54.557 07:32:20 -- accel/accel.sh@21 -- # val= 00:04:54.557 07:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # IFS=: 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # read -r var val 00:04:54.557 07:32:20 -- accel/accel.sh@21 -- # val= 00:04:54.557 07:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # IFS=: 
00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # read -r var val 00:04:54.557 07:32:20 -- accel/accel.sh@21 -- # val= 00:04:54.557 07:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.557 07:32:20 -- accel/accel.sh@20 -- # IFS=: 00:04:54.558 07:32:20 -- accel/accel.sh@20 -- # read -r var val 00:04:54.558 07:32:20 -- accel/accel.sh@21 -- # val= 00:04:54.558 07:32:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.558 07:32:20 -- accel/accel.sh@20 -- # IFS=: 00:04:54.558 07:32:20 -- accel/accel.sh@20 -- # read -r var val 00:04:54.558 07:32:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:54.558 07:32:20 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:54.558 07:32:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.558 00:04:54.558 real 0m2.688s 00:04:54.558 user 0m2.371s 00:04:54.558 sys 0m0.118s 00:04:54.558 07:32:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.558 07:32:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.558 ************************************ 00:04:54.558 END TEST accel_copy_crc32c 00:04:54.558 ************************************ 00:04:54.817 07:32:20 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:54.817 07:32:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:54.817 07:32:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.817 07:32:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.817 ************************************ 00:04:54.817 START TEST accel_copy_crc32c_C2 00:04:54.817 ************************************ 00:04:54.817 07:32:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:54.817 07:32:20 -- accel/accel.sh@16 -- # local accel_opc 00:04:54.817 07:32:20 -- accel/accel.sh@17 -- # local accel_module 00:04:54.817 07:32:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:54.817 07:32:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:54.817 07:32:20 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.817 07:32:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:54.817 07:32:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.817 07:32:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.817 07:32:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:54.817 07:32:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:54.817 07:32:20 -- accel/accel.sh@41 -- # local IFS=, 00:04:54.817 07:32:20 -- accel/accel.sh@42 -- # jq -r . 00:04:54.817 [2024-12-02 07:32:20.240404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:54.817 [2024-12-02 07:32:20.240498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56387 ] 00:04:54.817 [2024-12-02 07:32:20.375848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.817 [2024-12-02 07:32:20.430453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.195 07:32:21 -- accel/accel.sh@18 -- # out=' 00:04:56.195 SPDK Configuration: 00:04:56.195 Core mask: 0x1 00:04:56.195 00:04:56.195 Accel Perf Configuration: 00:04:56.195 Workload Type: copy_crc32c 00:04:56.195 CRC-32C seed: 0 00:04:56.195 Vector size: 4096 bytes 00:04:56.195 Transfer size: 8192 bytes 00:04:56.195 Vector count 2 00:04:56.195 Module: software 00:04:56.195 Queue depth: 32 00:04:56.195 Allocate depth: 32 00:04:56.195 # threads/core: 1 00:04:56.195 Run time: 1 seconds 00:04:56.195 Verify: Yes 00:04:56.195 00:04:56.195 Running for 1 seconds... 00:04:56.195 00:04:56.195 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:56.195 ------------------------------------------------------------------------------------ 00:04:56.195 0,0 221152/s 1727 MiB/s 0 0 00:04:56.195 ==================================================================================== 00:04:56.195 Total 221152/s 863 MiB/s 0 0' 00:04:56.195 07:32:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:56.195 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.195 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.195 07:32:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:04:56.195 07:32:21 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.195 07:32:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:56.195 07:32:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.195 07:32:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.195 07:32:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:56.195 07:32:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:56.195 07:32:21 -- accel/accel.sh@41 -- # local IFS=, 00:04:56.195 07:32:21 -- accel/accel.sh@42 -- # jq -r . 00:04:56.195 [2024-12-02 07:32:21.597454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:56.195 [2024-12-02 07:32:21.597539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56407 ] 00:04:56.195 [2024-12-02 07:32:21.733595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.195 [2024-12-02 07:32:21.781225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.195 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.195 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.195 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.195 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val=0x1 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val=0 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:56.196 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.196 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.196 07:32:21 -- accel/accel.sh@21 -- # val='8192 bytes' 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val=software 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@23 -- # accel_module=software 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val=32 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val=32 
00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val=1 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val=Yes 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:56.454 07:32:21 -- accel/accel.sh@21 -- # val= 00:04:56.454 07:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # IFS=: 00:04:56.454 07:32:21 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@21 -- # val= 00:04:57.389 07:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # IFS=: 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@21 -- # val= 00:04:57.389 07:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # IFS=: 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@21 -- # val= 00:04:57.389 07:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # IFS=: 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@21 -- # val= 00:04:57.389 07:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # IFS=: 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@21 -- # val= 00:04:57.389 07:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # IFS=: 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@21 -- # val= 00:04:57.389 07:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # IFS=: 00:04:57.389 07:32:22 -- accel/accel.sh@20 -- # read -r var val 00:04:57.389 07:32:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:57.389 07:32:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:57.389 07:32:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.389 00:04:57.389 real 0m2.709s 00:04:57.389 user 0m2.380s 00:04:57.389 sys 0m0.130s 00:04:57.389 07:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.389 07:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.389 ************************************ 00:04:57.389 END TEST accel_copy_crc32c_C2 00:04:57.389 ************************************ 00:04:57.389 07:32:22 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:57.389 07:32:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:04:57.389 07:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.389 07:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.389 ************************************ 00:04:57.389 START TEST accel_dualcast 00:04:57.389 ************************************ 00:04:57.389 07:32:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:04:57.389 07:32:22 -- accel/accel.sh@16 -- # local accel_opc 00:04:57.389 07:32:22 -- accel/accel.sh@17 -- # local accel_module 00:04:57.389 07:32:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:04:57.389 07:32:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:04:57.389 07:32:22 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.389 07:32:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.389 07:32:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.389 07:32:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.389 07:32:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.389 07:32:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.389 07:32:22 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.389 07:32:22 -- accel/accel.sh@42 -- # jq -r . 00:04:57.389 [2024-12-02 07:32:22.999517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.389 [2024-12-02 07:32:22.999604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56441 ] 00:04:57.648 [2024-12-02 07:32:23.134376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.648 [2024-12-02 07:32:23.184963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.057 07:32:24 -- accel/accel.sh@18 -- # out=' 00:04:59.057 SPDK Configuration: 00:04:59.057 Core mask: 0x1 00:04:59.057 00:04:59.057 Accel Perf Configuration: 00:04:59.057 Workload Type: dualcast 00:04:59.057 Transfer size: 4096 bytes 00:04:59.058 Vector count 1 00:04:59.058 Module: software 00:04:59.058 Queue depth: 32 00:04:59.058 Allocate depth: 32 00:04:59.058 # threads/core: 1 00:04:59.058 Run time: 1 seconds 00:04:59.058 Verify: Yes 00:04:59.058 00:04:59.058 Running for 1 seconds... 00:04:59.058 00:04:59.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:59.058 ------------------------------------------------------------------------------------ 00:04:59.058 0,0 423904/s 1655 MiB/s 0 0 00:04:59.058 ==================================================================================== 00:04:59.058 Total 423904/s 1655 MiB/s 0 0' 00:04:59.058 07:32:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:04:59.058 07:32:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.058 07:32:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:59.058 07:32:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.058 07:32:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.058 07:32:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:59.058 07:32:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:59.058 07:32:24 -- accel/accel.sh@41 -- # local IFS=, 00:04:59.058 07:32:24 -- accel/accel.sh@42 -- # jq -r . 
00:04:59.058 [2024-12-02 07:32:24.343676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.058 [2024-12-02 07:32:24.343767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56455 ] 00:04:59.058 [2024-12-02 07:32:24.466466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.058 [2024-12-02 07:32:24.515503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=0x1 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=dualcast 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=software 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@23 -- # accel_module=software 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=32 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=32 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=1 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 
07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val=Yes 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:04:59.058 07:32:24 -- accel/accel.sh@21 -- # val= 00:04:59.058 07:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # IFS=: 00:04:59.058 07:32:24 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@21 -- # val= 00:05:00.437 07:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # IFS=: 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@21 -- # val= 00:05:00.437 07:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # IFS=: 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@21 -- # val= 00:05:00.437 07:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # IFS=: 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@21 -- # val= 00:05:00.437 07:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # IFS=: 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@21 -- # val= 00:05:00.437 07:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # IFS=: 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@21 -- # val= 00:05:00.437 07:32:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # IFS=: 00:05:00.437 07:32:25 -- accel/accel.sh@20 -- # read -r var val 00:05:00.437 07:32:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:00.437 07:32:25 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:00.437 07:32:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.437 00:05:00.437 real 0m2.683s 00:05:00.437 user 0m2.358s 00:05:00.437 sys 0m0.128s 00:05:00.437 07:32:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.437 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.437 ************************************ 00:05:00.437 END TEST accel_dualcast 00:05:00.437 ************************************ 00:05:00.437 07:32:25 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:00.437 07:32:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:00.437 07:32:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.437 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.437 ************************************ 00:05:00.437 START TEST accel_compare 00:05:00.437 ************************************ 00:05:00.437 07:32:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:00.437 
07:32:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:00.437 07:32:25 -- accel/accel.sh@17 -- # local accel_module 00:05:00.437 07:32:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:00.437 07:32:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:00.437 07:32:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.437 07:32:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:00.437 07:32:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.437 07:32:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.437 07:32:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:00.437 07:32:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:00.437 07:32:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:00.437 07:32:25 -- accel/accel.sh@42 -- # jq -r . 00:05:00.437 [2024-12-02 07:32:25.731697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:00.437 [2024-12-02 07:32:25.731786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56490 ] 00:05:00.438 [2024-12-02 07:32:25.866514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.438 [2024-12-02 07:32:25.920750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.814 07:32:27 -- accel/accel.sh@18 -- # out=' 00:05:01.814 SPDK Configuration: 00:05:01.814 Core mask: 0x1 00:05:01.814 00:05:01.814 Accel Perf Configuration: 00:05:01.814 Workload Type: compare 00:05:01.814 Transfer size: 4096 bytes 00:05:01.814 Vector count 1 00:05:01.814 Module: software 00:05:01.814 Queue depth: 32 00:05:01.814 Allocate depth: 32 00:05:01.814 # threads/core: 1 00:05:01.814 Run time: 1 seconds 00:05:01.814 Verify: Yes 00:05:01.814 00:05:01.814 Running for 1 seconds... 00:05:01.814 00:05:01.814 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:01.814 ------------------------------------------------------------------------------------ 00:05:01.814 0,0 565760/s 2210 MiB/s 0 0 00:05:01.814 ==================================================================================== 00:05:01.814 Total 565760/s 2210 MiB/s 0 0' 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:01.814 07:32:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:01.814 07:32:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:01.814 07:32:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:01.814 07:32:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.814 07:32:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.814 07:32:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:01.814 07:32:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:01.814 07:32:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:01.814 07:32:27 -- accel/accel.sh@42 -- # jq -r . 00:05:01.814 [2024-12-02 07:32:27.090062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:01.814 [2024-12-02 07:32:27.090164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56509 ] 00:05:01.814 [2024-12-02 07:32:27.218622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.814 [2024-12-02 07:32:27.263329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=0x1 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=compare 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=software 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@23 -- # accel_module=software 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=32 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=32 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=1 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val=Yes 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:01.814 07:32:27 -- accel/accel.sh@21 -- # val= 00:05:01.814 07:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # IFS=: 00:05:01.814 07:32:27 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@21 -- # val= 00:05:03.193 07:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # IFS=: 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@21 -- # val= 00:05:03.193 07:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # IFS=: 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@21 -- # val= 00:05:03.193 07:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # IFS=: 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@21 -- # val= 00:05:03.193 07:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # IFS=: 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@21 -- # val= 00:05:03.193 07:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # IFS=: 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@21 -- # val= 00:05:03.193 07:32:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # IFS=: 00:05:03.193 07:32:28 -- accel/accel.sh@20 -- # read -r var val 00:05:03.193 07:32:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:03.193 07:32:28 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:03.193 07:32:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.193 00:05:03.193 real 0m2.696s 00:05:03.193 user 0m2.369s 00:05:03.193 sys 0m0.128s 00:05:03.193 07:32:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.193 07:32:28 -- common/autotest_common.sh@10 -- # set +x 00:05:03.193 ************************************ 00:05:03.193 END TEST accel_compare 00:05:03.193 ************************************ 00:05:03.193 07:32:28 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:03.193 07:32:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:03.193 07:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.193 07:32:28 -- common/autotest_common.sh@10 -- # set +x 00:05:03.193 ************************************ 00:05:03.193 START TEST accel_xor 00:05:03.193 ************************************ 00:05:03.193 07:32:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:03.193 07:32:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:03.193 07:32:28 -- accel/accel.sh@17 -- # local accel_module 00:05:03.193 
07:32:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:03.193 07:32:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:03.193 07:32:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:03.193 07:32:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:03.193 07:32:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.193 07:32:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.194 07:32:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:03.194 07:32:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:03.194 07:32:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:03.194 07:32:28 -- accel/accel.sh@42 -- # jq -r . 00:05:03.194 [2024-12-02 07:32:28.478047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:03.194 [2024-12-02 07:32:28.478140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56538 ] 00:05:03.194 [2024-12-02 07:32:28.614837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.194 [2024-12-02 07:32:28.668597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.573 07:32:29 -- accel/accel.sh@18 -- # out=' 00:05:04.573 SPDK Configuration: 00:05:04.573 Core mask: 0x1 00:05:04.574 00:05:04.574 Accel Perf Configuration: 00:05:04.574 Workload Type: xor 00:05:04.574 Source buffers: 2 00:05:04.574 Transfer size: 4096 bytes 00:05:04.574 Vector count 1 00:05:04.574 Module: software 00:05:04.574 Queue depth: 32 00:05:04.574 Allocate depth: 32 00:05:04.574 # threads/core: 1 00:05:04.574 Run time: 1 seconds 00:05:04.574 Verify: Yes 00:05:04.574 00:05:04.574 Running for 1 seconds... 00:05:04.574 00:05:04.574 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:04.574 ------------------------------------------------------------------------------------ 00:05:04.574 0,0 298816/s 1167 MiB/s 0 0 00:05:04.574 ==================================================================================== 00:05:04.574 Total 298816/s 1167 MiB/s 0 0' 00:05:04.574 07:32:29 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:04.574 07:32:29 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.574 07:32:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:04.574 07:32:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:04.574 07:32:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.574 07:32:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.574 07:32:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:04.574 07:32:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:04.574 07:32:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:04.574 07:32:29 -- accel/accel.sh@42 -- # jq -r . 00:05:04.574 [2024-12-02 07:32:29.837912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:04.574 [2024-12-02 07:32:29.838022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56558 ] 00:05:04.574 [2024-12-02 07:32:29.964766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.574 [2024-12-02 07:32:30.011194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=0x1 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=xor 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=2 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=software 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@23 -- # accel_module=software 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=32 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=32 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=1 00:05:04.574 07:32:30 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val=Yes 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:04.574 07:32:30 -- accel/accel.sh@21 -- # val= 00:05:04.574 07:32:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # IFS=: 00:05:04.574 07:32:30 -- accel/accel.sh@20 -- # read -r var val 00:05:05.953 07:32:31 -- accel/accel.sh@21 -- # val= 00:05:05.954 07:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # IFS=: 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # read -r var val 00:05:05.954 07:32:31 -- accel/accel.sh@21 -- # val= 00:05:05.954 07:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # IFS=: 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # read -r var val 00:05:05.954 07:32:31 -- accel/accel.sh@21 -- # val= 00:05:05.954 07:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # IFS=: 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # read -r var val 00:05:05.954 07:32:31 -- accel/accel.sh@21 -- # val= 00:05:05.954 07:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # IFS=: 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # read -r var val 00:05:05.954 07:32:31 -- accel/accel.sh@21 -- # val= 00:05:05.954 07:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # IFS=: 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # read -r var val 00:05:05.954 07:32:31 -- accel/accel.sh@21 -- # val= 00:05:05.954 07:32:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # IFS=: 00:05:05.954 07:32:31 -- accel/accel.sh@20 -- # read -r var val 00:05:05.954 07:32:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:05.954 07:32:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:05.954 07:32:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.954 00:05:05.954 real 0m2.707s 00:05:05.954 user 0m2.373s 00:05:05.954 sys 0m0.138s 00:05:05.954 07:32:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.954 07:32:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.954 ************************************ 00:05:05.954 END TEST accel_xor 00:05:05.954 ************************************ 00:05:05.954 07:32:31 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:05.954 07:32:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:05.954 07:32:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.954 07:32:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.954 ************************************ 00:05:05.954 START TEST accel_xor 00:05:05.954 ************************************ 00:05:05.954 
07:32:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:05:05.954 07:32:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.954 07:32:31 -- accel/accel.sh@17 -- # local accel_module 00:05:05.954 07:32:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:05.954 07:32:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:05.954 07:32:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.954 07:32:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:05.954 07:32:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.954 07:32:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.954 07:32:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:05.954 07:32:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:05.954 07:32:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:05.954 07:32:31 -- accel/accel.sh@42 -- # jq -r . 00:05:05.954 [2024-12-02 07:32:31.230214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:05.954 [2024-12-02 07:32:31.230320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56592 ] 00:05:05.954 [2024-12-02 07:32:31.356358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.954 [2024-12-02 07:32:31.401792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.333 07:32:32 -- accel/accel.sh@18 -- # out=' 00:05:07.333 SPDK Configuration: 00:05:07.333 Core mask: 0x1 00:05:07.333 00:05:07.333 Accel Perf Configuration: 00:05:07.333 Workload Type: xor 00:05:07.333 Source buffers: 3 00:05:07.333 Transfer size: 4096 bytes 00:05:07.333 Vector count 1 00:05:07.333 Module: software 00:05:07.333 Queue depth: 32 00:05:07.333 Allocate depth: 32 00:05:07.333 # threads/core: 1 00:05:07.333 Run time: 1 seconds 00:05:07.333 Verify: Yes 00:05:07.333 00:05:07.333 Running for 1 seconds... 00:05:07.333 00:05:07.333 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:07.333 ------------------------------------------------------------------------------------ 00:05:07.333 0,0 282752/s 1104 MiB/s 0 0 00:05:07.333 ==================================================================================== 00:05:07.333 Total 282752/s 1104 MiB/s 0 0' 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:07.333 07:32:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.333 07:32:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.333 07:32:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.333 07:32:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.333 07:32:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.333 07:32:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.333 07:32:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.333 07:32:32 -- accel/accel.sh@42 -- # jq -r . 00:05:07.333 [2024-12-02 07:32:32.576769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:07.333 [2024-12-02 07:32:32.576892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56606 ] 00:05:07.333 [2024-12-02 07:32:32.721716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.333 [2024-12-02 07:32:32.773638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=0x1 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=xor 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=3 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=software 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@23 -- # accel_module=software 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=32 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=32 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=1 00:05:07.333 07:32:32 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val=Yes 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:07.333 07:32:32 -- accel/accel.sh@21 -- # val= 00:05:07.333 07:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # IFS=: 00:05:07.333 07:32:32 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@21 -- # val= 00:05:08.712 07:32:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # IFS=: 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@21 -- # val= 00:05:08.712 07:32:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # IFS=: 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@21 -- # val= 00:05:08.712 07:32:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # IFS=: 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@21 -- # val= 00:05:08.712 07:32:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # IFS=: 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@21 -- # val= 00:05:08.712 07:32:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # IFS=: 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@21 -- # val= 00:05:08.712 07:32:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # IFS=: 00:05:08.712 07:32:33 -- accel/accel.sh@20 -- # read -r var val 00:05:08.712 07:32:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:08.712 ************************************ 00:05:08.712 END TEST accel_xor 00:05:08.712 ************************************ 00:05:08.712 07:32:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:08.712 07:32:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.712 00:05:08.712 real 0m2.717s 00:05:08.712 user 0m2.368s 00:05:08.712 sys 0m0.147s 00:05:08.712 07:32:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.712 07:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:08.712 07:32:33 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:08.712 07:32:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:08.712 07:32:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.712 07:32:33 -- common/autotest_common.sh@10 -- # set +x 00:05:08.712 ************************************ 00:05:08.712 START TEST accel_dif_verify 00:05:08.712 ************************************ 
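The accel_dif_verify test starting here reuses the same accel_perf binary with -w dif_verify; its configuration dump below reports 4096-byte vectors, a 512-byte block size and 8 bytes of metadata, i.e. a DIF-style protection field per 512-byte block that accel_perf generates and then checks. A comparable standalone invocation, under the same assumptions as the xor sketch above:

    # dif_verify workload for 1 second, using the default sizes shown in the log
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify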
00:05:08.712 07:32:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:05:08.712 07:32:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:08.712 07:32:33 -- accel/accel.sh@17 -- # local accel_module 00:05:08.712 07:32:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:08.712 07:32:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:08.712 07:32:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.712 07:32:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.712 07:32:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.712 07:32:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.712 07:32:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.712 07:32:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.712 07:32:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.712 07:32:33 -- accel/accel.sh@42 -- # jq -r . 00:05:08.712 [2024-12-02 07:32:34.006681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:08.712 [2024-12-02 07:32:34.006774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56641 ] 00:05:08.712 [2024-12-02 07:32:34.142062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.712 [2024-12-02 07:32:34.192236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.099 07:32:35 -- accel/accel.sh@18 -- # out=' 00:05:10.099 SPDK Configuration: 00:05:10.099 Core mask: 0x1 00:05:10.099 00:05:10.099 Accel Perf Configuration: 00:05:10.099 Workload Type: dif_verify 00:05:10.099 Vector size: 4096 bytes 00:05:10.099 Transfer size: 4096 bytes 00:05:10.099 Block size: 512 bytes 00:05:10.099 Metadata size: 8 bytes 00:05:10.099 Vector count 1 00:05:10.099 Module: software 00:05:10.099 Queue depth: 32 00:05:10.099 Allocate depth: 32 00:05:10.099 # threads/core: 1 00:05:10.099 Run time: 1 seconds 00:05:10.099 Verify: No 00:05:10.099 00:05:10.099 Running for 1 seconds... 00:05:10.099 00:05:10.099 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:10.099 ------------------------------------------------------------------------------------ 00:05:10.099 0,0 123936/s 491 MiB/s 0 0 00:05:10.099 ==================================================================================== 00:05:10.099 Total 123936/s 484 MiB/s 0 0' 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.099 07:32:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:10.099 07:32:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.099 07:32:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.099 07:32:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.099 07:32:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.099 07:32:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.099 07:32:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.099 07:32:35 -- accel/accel.sh@42 -- # jq -r . 00:05:10.099 [2024-12-02 07:32:35.363166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:10.099 [2024-12-02 07:32:35.363258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56656 ] 00:05:10.099 [2024-12-02 07:32:35.497774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.099 [2024-12-02 07:32:35.543475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val=0x1 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val=dif_verify 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val=software 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@23 -- # accel_module=software 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 
-- # val=32 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val=32 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val=1 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val=No 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:10.099 07:32:35 -- accel/accel.sh@21 -- # val= 00:05:10.099 07:32:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # IFS=: 00:05:10.099 07:32:35 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@21 -- # val= 00:05:11.477 07:32:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # IFS=: 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@21 -- # val= 00:05:11.477 07:32:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # IFS=: 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@21 -- # val= 00:05:11.477 07:32:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # IFS=: 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@21 -- # val= 00:05:11.477 07:32:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # IFS=: 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@21 -- # val= 00:05:11.477 07:32:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # IFS=: 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@21 -- # val= 00:05:11.477 07:32:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # IFS=: 00:05:11.477 07:32:36 -- accel/accel.sh@20 -- # read -r var val 00:05:11.477 07:32:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:11.477 07:32:36 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:11.477 07:32:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.477 00:05:11.477 real 0m2.713s 00:05:11.477 user 0m2.369s 00:05:11.477 sys 0m0.143s 00:05:11.477 07:32:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.477 07:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:11.477 ************************************ 00:05:11.477 END TEST 
accel_dif_verify 00:05:11.478 ************************************ 00:05:11.478 07:32:36 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:11.478 07:32:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:11.478 07:32:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.478 07:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:11.478 ************************************ 00:05:11.478 START TEST accel_dif_generate 00:05:11.478 ************************************ 00:05:11.478 07:32:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:05:11.478 07:32:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.478 07:32:36 -- accel/accel.sh@17 -- # local accel_module 00:05:11.478 07:32:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:11.478 07:32:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:11.478 07:32:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.478 07:32:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.478 07:32:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.478 07:32:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.478 07:32:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.478 07:32:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.478 07:32:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.478 07:32:36 -- accel/accel.sh@42 -- # jq -r . 00:05:11.478 [2024-12-02 07:32:36.767476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.478 [2024-12-02 07:32:36.767565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56695 ] 00:05:11.478 [2024-12-02 07:32:36.902228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.478 [2024-12-02 07:32:36.950276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.857 07:32:38 -- accel/accel.sh@18 -- # out=' 00:05:12.857 SPDK Configuration: 00:05:12.857 Core mask: 0x1 00:05:12.857 00:05:12.857 Accel Perf Configuration: 00:05:12.857 Workload Type: dif_generate 00:05:12.857 Vector size: 4096 bytes 00:05:12.857 Transfer size: 4096 bytes 00:05:12.857 Block size: 512 bytes 00:05:12.857 Metadata size: 8 bytes 00:05:12.857 Vector count 1 00:05:12.857 Module: software 00:05:12.857 Queue depth: 32 00:05:12.857 Allocate depth: 32 00:05:12.857 # threads/core: 1 00:05:12.857 Run time: 1 seconds 00:05:12.857 Verify: No 00:05:12.857 00:05:12.857 Running for 1 seconds... 
00:05:12.857 00:05:12.857 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:12.857 ------------------------------------------------------------------------------------ 00:05:12.857 0,0 152096/s 603 MiB/s 0 0 00:05:12.857 ==================================================================================== 00:05:12.857 Total 152096/s 594 MiB/s 0 0' 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.857 07:32:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:12.857 07:32:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.857 07:32:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.857 07:32:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.857 07:32:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.857 07:32:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.857 07:32:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.857 07:32:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.857 07:32:38 -- accel/accel.sh@42 -- # jq -r . 00:05:12.857 [2024-12-02 07:32:38.119379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:12.857 [2024-12-02 07:32:38.119470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56709 ] 00:05:12.857 [2024-12-02 07:32:38.253717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.857 [2024-12-02 07:32:38.301914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.857 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.857 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.857 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.857 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.857 07:32:38 -- accel/accel.sh@21 -- # val=0x1 00:05:12.857 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.857 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.857 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.857 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.857 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.857 07:32:38 -- accel/accel.sh@21 -- # val=dif_generate 00:05:12.857 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.857 07:32:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.857 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 
00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val=software 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@23 -- # accel_module=software 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val=32 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val=32 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val=1 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val=No 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:12.858 07:32:38 -- accel/accel.sh@21 -- # val= 00:05:12.858 07:32:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # IFS=: 00:05:12.858 07:32:38 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@21 -- # val= 00:05:14.236 07:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # IFS=: 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@21 -- # val= 00:05:14.236 07:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # IFS=: 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@21 -- # val= 00:05:14.236 07:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.236 07:32:39 -- 
accel/accel.sh@20 -- # IFS=: 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@21 -- # val= 00:05:14.236 07:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # IFS=: 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@21 -- # val= 00:05:14.236 07:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # IFS=: 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@21 -- # val= 00:05:14.236 07:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # IFS=: 00:05:14.236 07:32:39 -- accel/accel.sh@20 -- # read -r var val 00:05:14.236 07:32:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:14.236 07:32:39 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:14.236 07:32:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.236 00:05:14.236 real 0m2.711s 00:05:14.236 user 0m2.361s 00:05:14.236 sys 0m0.149s 00:05:14.236 07:32:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.236 ************************************ 00:05:14.236 END TEST accel_dif_generate 00:05:14.236 ************************************ 00:05:14.236 07:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:14.236 07:32:39 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:14.236 07:32:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:14.236 07:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.236 07:32:39 -- common/autotest_common.sh@10 -- # set +x 00:05:14.236 ************************************ 00:05:14.236 START TEST accel_dif_generate_copy 00:05:14.236 ************************************ 00:05:14.236 07:32:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:05:14.236 07:32:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.236 07:32:39 -- accel/accel.sh@17 -- # local accel_module 00:05:14.236 07:32:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:14.236 07:32:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:14.236 07:32:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.236 07:32:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.236 07:32:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.236 07:32:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.236 07:32:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.236 07:32:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.236 07:32:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.236 07:32:39 -- accel/accel.sh@42 -- # jq -r . 00:05:14.236 [2024-12-02 07:32:39.526604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:14.236 [2024-12-02 07:32:39.526679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56738 ] 00:05:14.236 [2024-12-02 07:32:39.655501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.237 [2024-12-02 07:32:39.705839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.614 07:32:40 -- accel/accel.sh@18 -- # out=' 00:05:15.614 SPDK Configuration: 00:05:15.614 Core mask: 0x1 00:05:15.614 00:05:15.614 Accel Perf Configuration: 00:05:15.614 Workload Type: dif_generate_copy 00:05:15.614 Vector size: 4096 bytes 00:05:15.614 Transfer size: 4096 bytes 00:05:15.614 Vector count 1 00:05:15.614 Module: software 00:05:15.614 Queue depth: 32 00:05:15.614 Allocate depth: 32 00:05:15.614 # threads/core: 1 00:05:15.614 Run time: 1 seconds 00:05:15.614 Verify: No 00:05:15.614 00:05:15.614 Running for 1 seconds... 00:05:15.614 00:05:15.614 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:15.614 ------------------------------------------------------------------------------------ 00:05:15.614 0,0 115552/s 458 MiB/s 0 0 00:05:15.614 ==================================================================================== 00:05:15.614 Total 115552/s 451 MiB/s 0 0' 00:05:15.614 07:32:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:15.614 07:32:40 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:40 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:15.614 07:32:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.614 07:32:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.614 07:32:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.614 07:32:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.614 07:32:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.614 07:32:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.614 07:32:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.614 07:32:40 -- accel/accel.sh@42 -- # jq -r . 00:05:15.614 [2024-12-02 07:32:40.876127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:15.614 [2024-12-02 07:32:40.876217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56763 ] 00:05:15.614 [2024-12-02 07:32:41.010435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.614 [2024-12-02 07:32:41.059132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val=0x1 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val=software 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@23 -- # accel_module=software 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val=32 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val=32 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 
-- # val=1 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.614 07:32:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:15.614 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.614 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.615 07:32:41 -- accel/accel.sh@21 -- # val=No 00:05:15.615 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.615 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.615 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.615 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.615 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.615 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.615 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:15.615 07:32:41 -- accel/accel.sh@21 -- # val= 00:05:15.615 07:32:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.615 07:32:41 -- accel/accel.sh@20 -- # IFS=: 00:05:15.615 07:32:41 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@21 -- # val= 00:05:16.991 07:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # IFS=: 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@21 -- # val= 00:05:16.991 07:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # IFS=: 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@21 -- # val= 00:05:16.991 07:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # IFS=: 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@21 -- # val= 00:05:16.991 07:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # IFS=: 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@21 -- # val= 00:05:16.991 07:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # IFS=: 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@21 -- # val= 00:05:16.991 07:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # IFS=: 00:05:16.991 07:32:42 -- accel/accel.sh@20 -- # read -r var val 00:05:16.991 07:32:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:16.991 07:32:42 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:16.991 07:32:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.991 00:05:16.991 real 0m2.700s 00:05:16.991 user 0m2.369s 00:05:16.991 sys 0m0.130s 00:05:16.991 07:32:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.991 07:32:42 -- common/autotest_common.sh@10 -- # set +x 00:05:16.991 ************************************ 00:05:16.991 END TEST accel_dif_generate_copy 00:05:16.991 ************************************ 00:05:16.991 07:32:42 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:16.991 07:32:42 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:16.991 07:32:42 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:16.991 07:32:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.991 07:32:42 -- 
common/autotest_common.sh@10 -- # set +x 00:05:16.991 ************************************ 00:05:16.991 START TEST accel_comp 00:05:16.991 ************************************ 00:05:16.991 07:32:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:16.991 07:32:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:16.991 07:32:42 -- accel/accel.sh@17 -- # local accel_module 00:05:16.991 07:32:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:16.991 07:32:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:16.991 07:32:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.991 07:32:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:16.991 07:32:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.991 07:32:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.992 07:32:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:16.992 07:32:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:16.992 07:32:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:16.992 07:32:42 -- accel/accel.sh@42 -- # jq -r . 00:05:16.992 [2024-12-02 07:32:42.284032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.992 [2024-12-02 07:32:42.284125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56792 ] 00:05:16.992 [2024-12-02 07:32:42.419043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.992 [2024-12-02 07:32:42.469115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.372 07:32:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:18.372 00:05:18.372 SPDK Configuration: 00:05:18.372 Core mask: 0x1 00:05:18.372 00:05:18.372 Accel Perf Configuration: 00:05:18.372 Workload Type: compress 00:05:18.372 Transfer size: 4096 bytes 00:05:18.372 Vector count 1 00:05:18.372 Module: software 00:05:18.372 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.372 Queue depth: 32 00:05:18.372 Allocate depth: 32 00:05:18.372 # threads/core: 1 00:05:18.372 Run time: 1 seconds 00:05:18.372 Verify: No 00:05:18.372 00:05:18.372 Running for 1 seconds... 
00:05:18.372 00:05:18.372 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:18.372 ------------------------------------------------------------------------------------ 00:05:18.372 0,0 59648/s 248 MiB/s 0 0 00:05:18.372 ==================================================================================== 00:05:18.372 Total 59648/s 233 MiB/s 0 0' 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.372 07:32:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.372 07:32:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.372 07:32:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.372 07:32:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.372 07:32:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.372 07:32:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.372 07:32:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.372 07:32:43 -- accel/accel.sh@42 -- # jq -r . 00:05:18.372 [2024-12-02 07:32:43.639313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:18.372 [2024-12-02 07:32:43.639404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56806 ] 00:05:18.372 [2024-12-02 07:32:43.775609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.372 [2024-12-02 07:32:43.824163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=0x1 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=compress 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 
00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=software 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@23 -- # accel_module=software 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=32 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=32 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=1 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val=No 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:18.372 07:32:43 -- accel/accel.sh@21 -- # val= 00:05:18.372 07:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # IFS=: 00:05:18.372 07:32:43 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@21 -- # val= 00:05:19.748 07:32:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # IFS=: 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@21 -- # val= 00:05:19.748 07:32:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # IFS=: 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@21 -- # val= 00:05:19.748 07:32:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # IFS=: 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@21 -- # val= 
00:05:19.748 07:32:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # IFS=: 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@21 -- # val= 00:05:19.748 07:32:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # IFS=: 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@21 -- # val= 00:05:19.748 07:32:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # IFS=: 00:05:19.748 07:32:44 -- accel/accel.sh@20 -- # read -r var val 00:05:19.748 07:32:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:19.748 07:32:44 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:19.748 07:32:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.748 00:05:19.748 real 0m2.723s 00:05:19.748 user 0m2.378s 00:05:19.748 sys 0m0.138s 00:05:19.748 07:32:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.748 07:32:44 -- common/autotest_common.sh@10 -- # set +x 00:05:19.748 ************************************ 00:05:19.748 END TEST accel_comp 00:05:19.748 ************************************ 00:05:19.748 07:32:45 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.748 07:32:45 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:19.748 07:32:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.748 07:32:45 -- common/autotest_common.sh@10 -- # set +x 00:05:19.748 ************************************ 00:05:19.748 START TEST accel_decomp 00:05:19.748 ************************************ 00:05:19.749 07:32:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.749 07:32:45 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.749 07:32:45 -- accel/accel.sh@17 -- # local accel_module 00:05:19.749 07:32:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.749 07:32:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:19.749 07:32:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.749 07:32:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.749 07:32:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.749 07:32:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.749 07:32:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.749 07:32:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.749 07:32:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.749 07:32:45 -- accel/accel.sh@42 -- # jq -r . 00:05:19.749 [2024-12-02 07:32:45.062953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.749 [2024-12-02 07:32:45.063044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56846 ] 00:05:19.749 [2024-12-02 07:32:45.195530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.749 [2024-12-02 07:32:45.243455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.130 07:32:46 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:21.130 00:05:21.130 SPDK Configuration: 00:05:21.130 Core mask: 0x1 00:05:21.130 00:05:21.130 Accel Perf Configuration: 00:05:21.130 Workload Type: decompress 00:05:21.130 Transfer size: 4096 bytes 00:05:21.130 Vector count 1 00:05:21.130 Module: software 00:05:21.130 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.130 Queue depth: 32 00:05:21.130 Allocate depth: 32 00:05:21.130 # threads/core: 1 00:05:21.130 Run time: 1 seconds 00:05:21.130 Verify: Yes 00:05:21.130 00:05:21.130 Running for 1 seconds... 00:05:21.130 00:05:21.130 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:21.130 ------------------------------------------------------------------------------------ 00:05:21.130 0,0 84960/s 156 MiB/s 0 0 00:05:21.130 ==================================================================================== 00:05:21.130 Total 84960/s 331 MiB/s 0 0' 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:21.130 07:32:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.130 07:32:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.130 07:32:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.130 07:32:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.130 07:32:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.130 07:32:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.130 07:32:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.130 07:32:46 -- accel/accel.sh@42 -- # jq -r . 00:05:21.130 [2024-12-02 07:32:46.411743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:21.130 [2024-12-02 07:32:46.411834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56860 ] 00:05:21.130 [2024-12-02 07:32:46.546154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.130 [2024-12-02 07:32:46.595787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=0x1 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=decompress 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=software 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=32 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- 
accel/accel.sh@21 -- # val=32 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=1 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val=Yes 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:21.130 07:32:46 -- accel/accel.sh@21 -- # val= 00:05:21.130 07:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # IFS=: 00:05:21.130 07:32:46 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@21 -- # val= 00:05:22.580 07:32:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # IFS=: 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@21 -- # val= 00:05:22.580 07:32:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # IFS=: 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@21 -- # val= 00:05:22.580 07:32:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # IFS=: 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@21 -- # val= 00:05:22.580 07:32:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # IFS=: 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@21 -- # val= 00:05:22.580 07:32:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # IFS=: 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@21 -- # val= 00:05:22.580 07:32:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # IFS=: 00:05:22.580 07:32:47 -- accel/accel.sh@20 -- # read -r var val 00:05:22.580 07:32:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.580 07:32:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:22.580 07:32:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.580 00:05:22.580 real 0m2.724s 00:05:22.580 user 0m2.366s 00:05:22.580 sys 0m0.155s 00:05:22.580 07:32:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.580 07:32:47 -- common/autotest_common.sh@10 -- # set +x 00:05:22.580 ************************************ 00:05:22.580 END TEST accel_decomp 00:05:22.580 ************************************ 00:05:22.580 07:32:47 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:05:22.580 07:32:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:22.580 07:32:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.580 07:32:47 -- common/autotest_common.sh@10 -- # set +x 00:05:22.580 ************************************ 00:05:22.580 START TEST accel_decmop_full 00:05:22.580 ************************************ 00:05:22.580 07:32:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:22.580 07:32:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.580 07:32:47 -- accel/accel.sh@17 -- # local accel_module 00:05:22.580 07:32:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:22.580 07:32:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:22.580 07:32:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.580 07:32:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.580 07:32:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.580 07:32:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.580 07:32:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.580 07:32:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.580 07:32:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.580 07:32:47 -- accel/accel.sh@42 -- # jq -r . 00:05:22.580 [2024-12-02 07:32:47.840550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.580 [2024-12-02 07:32:47.840646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56889 ] 00:05:22.581 [2024-12-02 07:32:47.976206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.581 [2024-12-02 07:32:48.025321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.968 07:32:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:23.968 00:05:23.968 SPDK Configuration: 00:05:23.968 Core mask: 0x1 00:05:23.968 00:05:23.968 Accel Perf Configuration: 00:05:23.968 Workload Type: decompress 00:05:23.968 Transfer size: 111250 bytes 00:05:23.968 Vector count 1 00:05:23.968 Module: software 00:05:23.968 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:23.968 Queue depth: 32 00:05:23.968 Allocate depth: 32 00:05:23.968 # threads/core: 1 00:05:23.968 Run time: 1 seconds 00:05:23.968 Verify: Yes 00:05:23.968 00:05:23.968 Running for 1 seconds... 
00:05:23.968 00:05:23.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:23.968 ------------------------------------------------------------------------------------ 00:05:23.968 0,0 5600/s 231 MiB/s 0 0 00:05:23.968 ==================================================================================== 00:05:23.968 Total 5600/s 594 MiB/s 0 0' 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.968 07:32:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.968 07:32:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:23.968 07:32:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.968 07:32:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.968 07:32:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.968 07:32:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.968 07:32:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.968 07:32:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.968 07:32:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.968 07:32:49 -- accel/accel.sh@42 -- # jq -r . 00:05:23.968 [2024-12-02 07:32:49.199723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.968 [2024-12-02 07:32:49.199814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56914 ] 00:05:23.968 [2024-12-02 07:32:49.335284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.968 [2024-12-02 07:32:49.384001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.968 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.968 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.968 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.968 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.968 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.968 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.968 07:32:49 -- accel/accel.sh@21 -- # val=0x1 00:05:23.968 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.968 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.968 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=decompress 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:23.969 07:32:49 -- accel/accel.sh@20 
-- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=software 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=32 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=32 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=1 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val=Yes 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:23.969 07:32:49 -- accel/accel.sh@21 -- # val= 00:05:23.969 07:32:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # IFS=: 00:05:23.969 07:32:49 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@21 -- # val= 00:05:25.345 07:32:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # IFS=: 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@21 -- # val= 00:05:25.345 07:32:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # IFS=: 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@21 -- # val= 00:05:25.345 07:32:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # IFS=: 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@21 -- # 
val= 00:05:25.345 07:32:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # IFS=: 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@21 -- # val= 00:05:25.345 07:32:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # IFS=: 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@21 -- # val= 00:05:25.345 07:32:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # IFS=: 00:05:25.345 07:32:50 -- accel/accel.sh@20 -- # read -r var val 00:05:25.345 07:32:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.345 07:32:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:25.345 07:32:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.345 00:05:25.345 real 0m2.732s 00:05:25.345 user 0m2.388s 00:05:25.345 sys 0m0.141s 00:05:25.345 07:32:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.345 07:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:25.345 ************************************ 00:05:25.345 END TEST accel_decmop_full 00:05:25.345 ************************************ 00:05:25.345 07:32:50 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:25.345 07:32:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:25.345 07:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.345 07:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:25.345 ************************************ 00:05:25.345 START TEST accel_decomp_mcore 00:05:25.345 ************************************ 00:05:25.345 07:32:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:25.345 07:32:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.345 07:32:50 -- accel/accel.sh@17 -- # local accel_module 00:05:25.345 07:32:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:25.345 07:32:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:25.345 07:32:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.345 07:32:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.345 07:32:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.345 07:32:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.345 07:32:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.345 07:32:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.345 07:32:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.345 07:32:50 -- accel/accel.sh@42 -- # jq -r . 00:05:25.345 [2024-12-02 07:32:50.622110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:25.345 [2024-12-02 07:32:50.622213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56943 ] 00:05:25.345 [2024-12-02 07:32:50.755631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.345 [2024-12-02 07:32:50.807890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.345 [2024-12-02 07:32:50.808048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.345 [2024-12-02 07:32:50.808166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.345 [2024-12-02 07:32:50.808406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.724 07:32:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:26.724 00:05:26.724 SPDK Configuration: 00:05:26.724 Core mask: 0xf 00:05:26.724 00:05:26.724 Accel Perf Configuration: 00:05:26.724 Workload Type: decompress 00:05:26.724 Transfer size: 4096 bytes 00:05:26.724 Vector count 1 00:05:26.724 Module: software 00:05:26.724 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:26.724 Queue depth: 32 00:05:26.724 Allocate depth: 32 00:05:26.724 # threads/core: 1 00:05:26.724 Run time: 1 seconds 00:05:26.724 Verify: Yes 00:05:26.724 00:05:26.724 Running for 1 seconds... 00:05:26.724 00:05:26.724 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.724 ------------------------------------------------------------------------------------ 00:05:26.724 0,0 67712/s 124 MiB/s 0 0 00:05:26.724 3,0 64576/s 118 MiB/s 0 0 00:05:26.724 2,0 62880/s 115 MiB/s 0 0 00:05:26.724 1,0 64224/s 118 MiB/s 0 0 00:05:26.724 ==================================================================================== 00:05:26.724 Total 259392/s 1013 MiB/s 0 0' 00:05:26.724 07:32:51 -- accel/accel.sh@20 -- # IFS=: 00:05:26.724 07:32:51 -- accel/accel.sh@20 -- # read -r var val 00:05:26.724 07:32:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:26.724 07:32:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:26.724 07:32:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.725 07:32:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.725 07:32:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.725 07:32:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.725 07:32:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.725 07:32:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.725 07:32:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.725 07:32:51 -- accel/accel.sh@42 -- # jq -r . 00:05:26.725 [2024-12-02 07:32:51.983928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:26.725 [2024-12-02 07:32:51.984007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56960 ] 00:05:26.725 [2024-12-02 07:32:52.110137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.725 [2024-12-02 07:32:52.167117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.725 [2024-12-02 07:32:52.167211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.725 [2024-12-02 07:32:52.167325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.725 [2024-12-02 07:32:52.167328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=0xf 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=decompress 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=software 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@23 -- # accel_module=software 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 
00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=32 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=32 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=1 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val=Yes 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:26.725 07:32:52 -- accel/accel.sh@21 -- # val= 00:05:26.725 07:32:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # IFS=: 00:05:26.725 07:32:52 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- 
accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@21 -- # val= 00:05:28.102 07:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # IFS=: 00:05:28.102 07:32:53 -- accel/accel.sh@20 -- # read -r var val 00:05:28.102 07:32:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.102 07:32:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:28.102 07:32:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.102 00:05:28.102 real 0m2.734s 00:05:28.102 user 0m8.785s 00:05:28.102 sys 0m0.159s 00:05:28.102 ************************************ 00:05:28.102 END TEST accel_decomp_mcore 00:05:28.102 ************************************ 00:05:28.102 07:32:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.102 07:32:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.102 07:32:53 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.102 07:32:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:28.102 07:32:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.102 07:32:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.102 ************************************ 00:05:28.102 START TEST accel_decomp_full_mcore 00:05:28.102 ************************************ 00:05:28.102 07:32:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.102 07:32:53 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.102 07:32:53 -- accel/accel.sh@17 -- # local accel_module 00:05:28.102 07:32:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.102 07:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:28.102 07:32:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.102 07:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.102 07:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.102 07:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.102 07:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.102 07:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.102 07:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.102 07:32:53 -- accel/accel.sh@42 -- # jq -r . 00:05:28.102 [2024-12-02 07:32:53.400948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.102 [2024-12-02 07:32:53.401178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57003 ] 00:05:28.102 [2024-12-02 07:32:53.531670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.102 [2024-12-02 07:32:53.580013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.102 [2024-12-02 07:32:53.580119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.102 [2024-12-02 07:32:53.580242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.102 [2024-12-02 07:32:53.580245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.478 07:32:54 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:29.478 00:05:29.478 SPDK Configuration: 00:05:29.478 Core mask: 0xf 00:05:29.478 00:05:29.478 Accel Perf Configuration: 00:05:29.478 Workload Type: decompress 00:05:29.478 Transfer size: 111250 bytes 00:05:29.478 Vector count 1 00:05:29.478 Module: software 00:05:29.478 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:29.478 Queue depth: 32 00:05:29.478 Allocate depth: 32 00:05:29.478 # threads/core: 1 00:05:29.478 Run time: 1 seconds 00:05:29.478 Verify: Yes 00:05:29.478 00:05:29.478 Running for 1 seconds... 00:05:29.478 00:05:29.478 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.478 ------------------------------------------------------------------------------------ 00:05:29.478 0,0 5120/s 211 MiB/s 0 0 00:05:29.478 3,0 5152/s 212 MiB/s 0 0 00:05:29.478 2,0 5088/s 210 MiB/s 0 0 00:05:29.478 1,0 5120/s 211 MiB/s 0 0 00:05:29.478 ==================================================================================== 00:05:29.478 Total 20480/s 2172 MiB/s 0 0' 00:05:29.478 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.478 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.478 07:32:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:29.478 07:32:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:29.479 07:32:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.479 07:32:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.479 07:32:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.479 07:32:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.479 07:32:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.479 07:32:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.479 07:32:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.479 07:32:54 -- accel/accel.sh@42 -- # jq -r . 00:05:29.479 [2024-12-02 07:32:54.768125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:29.479 [2024-12-02 07:32:54.768215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57020 ] 00:05:29.479 [2024-12-02 07:32:54.902564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.479 [2024-12-02 07:32:54.954212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.479 [2024-12-02 07:32:54.954336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.479 [2024-12-02 07:32:54.954493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.479 [2024-12-02 07:32:54.954499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val=0xf 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val=decompress 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:54 -- accel/accel.sh@21 -- # val=software 00:05:29.479 07:32:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:54 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:54 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 
00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val=32 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val=32 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val=1 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val=Yes 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:29.479 07:32:55 -- accel/accel.sh@21 -- # val= 00:05:29.479 07:32:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # IFS=: 00:05:29.479 07:32:55 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- 
accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@21 -- # val= 00:05:30.865 07:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # IFS=: 00:05:30.865 07:32:56 -- accel/accel.sh@20 -- # read -r var val 00:05:30.865 07:32:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:30.865 07:32:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:30.865 07:32:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.865 00:05:30.865 real 0m2.744s 00:05:30.865 user 0m8.842s 00:05:30.865 sys 0m0.164s 00:05:30.865 07:32:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.865 07:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.865 ************************************ 00:05:30.865 END TEST accel_decomp_full_mcore 00:05:30.865 ************************************ 00:05:30.865 07:32:56 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.865 07:32:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:30.865 07:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.865 07:32:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.865 ************************************ 00:05:30.865 START TEST accel_decomp_mthread 00:05:30.865 ************************************ 00:05:30.865 07:32:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.865 07:32:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.865 07:32:56 -- accel/accel.sh@17 -- # local accel_module 00:05:30.865 07:32:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.865 07:32:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.865 07:32:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.865 07:32:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.865 07:32:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.865 07:32:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.865 07:32:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.865 07:32:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.865 07:32:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.865 07:32:56 -- accel/accel.sh@42 -- # jq -r . 00:05:30.865 [2024-12-02 07:32:56.187744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.865 [2024-12-02 07:32:56.187977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57058 ] 00:05:30.865 [2024-12-02 07:32:56.323542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.865 [2024-12-02 07:32:56.379943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.243 07:32:57 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:32.243 00:05:32.243 SPDK Configuration: 00:05:32.243 Core mask: 0x1 00:05:32.243 00:05:32.243 Accel Perf Configuration: 00:05:32.243 Workload Type: decompress 00:05:32.243 Transfer size: 4096 bytes 00:05:32.243 Vector count 1 00:05:32.243 Module: software 00:05:32.243 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:32.243 Queue depth: 32 00:05:32.243 Allocate depth: 32 00:05:32.243 # threads/core: 2 00:05:32.243 Run time: 1 seconds 00:05:32.243 Verify: Yes 00:05:32.243 00:05:32.243 Running for 1 seconds... 00:05:32.243 00:05:32.243 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:32.243 ------------------------------------------------------------------------------------ 00:05:32.243 0,1 42496/s 78 MiB/s 0 0 00:05:32.243 0,0 42400/s 78 MiB/s 0 0 00:05:32.243 ==================================================================================== 00:05:32.243 Total 84896/s 331 MiB/s 0 0' 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:32.243 07:32:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.243 07:32:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.243 07:32:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.243 07:32:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.243 07:32:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.243 07:32:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.243 07:32:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.243 07:32:57 -- accel/accel.sh@42 -- # jq -r . 00:05:32.243 [2024-12-02 07:32:57.546169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:32.243 [2024-12-02 07:32:57.546250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57077 ] 00:05:32.243 [2024-12-02 07:32:57.672396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.243 [2024-12-02 07:32:57.717398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=0x1 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=decompress 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=software 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=32 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- 
accel/accel.sh@21 -- # val=32 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=2 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val=Yes 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:32.243 07:32:57 -- accel/accel.sh@21 -- # val= 00:05:32.243 07:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # IFS=: 00:05:32.243 07:32:57 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@21 -- # val= 00:05:33.620 07:32:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # IFS=: 00:05:33.620 07:32:58 -- accel/accel.sh@20 -- # read -r var val 00:05:33.620 07:32:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:33.620 07:32:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:33.620 07:32:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.620 ************************************ 00:05:33.620 END TEST accel_decomp_mthread 00:05:33.620 ************************************ 00:05:33.620 00:05:33.620 real 0m2.704s 00:05:33.620 user 0m2.372s 00:05:33.620 sys 0m0.126s 00:05:33.620 07:32:58 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:05:33.620 07:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.620 07:32:58 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.620 07:32:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:33.620 07:32:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.620 07:32:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.620 ************************************ 00:05:33.620 START TEST accel_deomp_full_mthread 00:05:33.620 ************************************ 00:05:33.620 07:32:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.620 07:32:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.620 07:32:58 -- accel/accel.sh@17 -- # local accel_module 00:05:33.620 07:32:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.620 07:32:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:33.620 07:32:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.620 07:32:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.620 07:32:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.620 07:32:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.620 07:32:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.620 07:32:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.620 07:32:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.620 07:32:58 -- accel/accel.sh@42 -- # jq -r . 00:05:33.620 [2024-12-02 07:32:58.943308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.620 [2024-12-02 07:32:58.943421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57106 ] 00:05:33.620 [2024-12-02 07:32:59.069356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.620 [2024-12-02 07:32:59.117650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.997 07:33:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:34.997 00:05:34.997 SPDK Configuration: 00:05:34.997 Core mask: 0x1 00:05:34.997 00:05:34.997 Accel Perf Configuration: 00:05:34.997 Workload Type: decompress 00:05:34.997 Transfer size: 111250 bytes 00:05:34.997 Vector count 1 00:05:34.997 Module: software 00:05:34.997 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:34.997 Queue depth: 32 00:05:34.997 Allocate depth: 32 00:05:34.997 # threads/core: 2 00:05:34.997 Run time: 1 seconds 00:05:34.997 Verify: Yes 00:05:34.997 00:05:34.997 Running for 1 seconds... 
00:05:34.997 00:05:34.997 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:34.997 ------------------------------------------------------------------------------------ 00:05:34.997 0,1 2848/s 117 MiB/s 0 0 00:05:34.997 0,0 2784/s 115 MiB/s 0 0 00:05:34.997 ==================================================================================== 00:05:34.997 Total 5632/s 597 MiB/s 0 0' 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:34.997 07:33:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.997 07:33:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.997 07:33:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.997 07:33:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.997 07:33:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.997 07:33:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.997 07:33:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.997 07:33:00 -- accel/accel.sh@42 -- # jq -r . 00:05:34.997 [2024-12-02 07:33:00.318974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.997 [2024-12-02 07:33:00.319061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57126 ] 00:05:34.997 [2024-12-02 07:33:00.456660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.997 [2024-12-02 07:33:00.512711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=0x1 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=decompress 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=software 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=32 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=32 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=2 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val=Yes 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:34.997 07:33:00 -- accel/accel.sh@21 -- # val= 00:05:34.997 07:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # IFS=: 00:05:34.997 07:33:00 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # 
read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@21 -- # val= 00:05:36.375 07:33:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # IFS=: 00:05:36.375 07:33:01 -- accel/accel.sh@20 -- # read -r var val 00:05:36.375 07:33:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.375 07:33:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:36.375 07:33:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.375 00:05:36.375 real 0m2.786s 00:05:36.375 user 0m2.428s 00:05:36.375 sys 0m0.152s 00:05:36.375 07:33:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.375 ************************************ 00:05:36.375 END TEST accel_deomp_full_mthread 00:05:36.375 ************************************ 00:05:36.375 07:33:01 -- common/autotest_common.sh@10 -- # set +x 00:05:36.375 07:33:01 -- accel/accel.sh@116 -- # [[ n == y ]] 00:05:36.375 07:33:01 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:36.375 07:33:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:36.375 07:33:01 -- accel/accel.sh@129 -- # build_accel_config 00:05:36.375 07:33:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.375 07:33:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.375 07:33:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.375 07:33:01 -- common/autotest_common.sh@10 -- # set +x 00:05:36.375 07:33:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.375 07:33:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.375 07:33:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.375 07:33:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.375 07:33:01 -- accel/accel.sh@42 -- # jq -r . 00:05:36.375 ************************************ 00:05:36.375 START TEST accel_dif_functional_tests 00:05:36.375 ************************************ 00:05:36.375 07:33:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:36.375 [2024-12-02 07:33:01.813107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:36.375 [2024-12-02 07:33:01.813876] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57163 ] 00:05:36.375 [2024-12-02 07:33:01.953676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.634 [2024-12-02 07:33:02.015666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.635 [2024-12-02 07:33:02.015837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.635 [2024-12-02 07:33:02.015840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.635 00:05:36.635 00:05:36.635 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.635 http://cunit.sourceforge.net/ 00:05:36.635 00:05:36.635 00:05:36.635 Suite: accel_dif 00:05:36.635 Test: verify: DIF generated, GUARD check ...passed 00:05:36.635 Test: verify: DIF generated, APPTAG check ...passed 00:05:36.635 Test: verify: DIF generated, REFTAG check ...passed 00:05:36.635 Test: verify: DIF not generated, GUARD check ...passed 00:05:36.635 Test: verify: DIF not generated, APPTAG check ...passed 00:05:36.635 Test: verify: DIF not generated, REFTAG check ...[2024-12-02 07:33:02.071076] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:36.635 [2024-12-02 07:33:02.071162] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:36.635 [2024-12-02 07:33:02.071228] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:36.635 [2024-12-02 07:33:02.071256] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:36.635 passed 00:05:36.635 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:36.635 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:36.635 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:36.635 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:36.635 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:36.635 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-02 07:33:02.071279] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:36.635 [2024-12-02 07:33:02.071318] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:36.635 [2024-12-02 07:33:02.071376] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:36.635 passed 00:05:36.635 Test: generate copy: DIF generated, GUARD check ...passed 00:05:36.635 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:36.635 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:36.635 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-12-02 07:33:02.071515] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:36.635 passed 00:05:36.635 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:36.635 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:36.635 Test: generate copy: iovecs-len validate ...[2024-12-02 07:33:02.071760] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:36.635 passed 00:05:36.635 Test: generate copy: buffer alignment validate ...passed 00:05:36.635 00:05:36.635 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.635 suites 1 1 n/a 0 0 00:05:36.635 tests 20 20 20 0 0 00:05:36.635 asserts 204 204 204 0 n/a 00:05:36.635 00:05:36.635 Elapsed time = 0.002 seconds 00:05:36.635 00:05:36.635 real 0m0.471s 00:05:36.635 user 0m0.526s 00:05:36.635 sys 0m0.116s 00:05:36.635 07:33:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.635 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:36.635 ************************************ 00:05:36.635 END TEST accel_dif_functional_tests 00:05:36.635 ************************************ 00:05:36.894 00:05:36.894 real 0m58.393s 00:05:36.894 user 1m3.641s 00:05:36.894 sys 0m4.162s 00:05:36.894 07:33:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.894 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:36.894 ************************************ 00:05:36.894 END TEST accel 00:05:36.894 ************************************ 00:05:36.894 07:33:02 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:36.894 07:33:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.894 07:33:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.894 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:36.894 ************************************ 00:05:36.894 START TEST accel_rpc 00:05:36.894 ************************************ 00:05:36.894 07:33:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:36.894 * Looking for test storage... 00:05:36.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:36.894 07:33:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.894 07:33:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.894 07:33:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.894 07:33:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.894 07:33:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.894 07:33:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.894 07:33:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.894 07:33:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.894 07:33:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.894 07:33:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.894 07:33:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.894 07:33:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.894 07:33:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.894 07:33:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.894 07:33:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.894 07:33:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.894 07:33:02 -- scripts/common.sh@344 -- # : 1 00:05:36.894 07:33:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.894 07:33:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.894 07:33:02 -- scripts/common.sh@364 -- # decimal 1 00:05:36.894 07:33:02 -- scripts/common.sh@352 -- # local d=1 00:05:36.894 07:33:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.894 07:33:02 -- scripts/common.sh@354 -- # echo 1 00:05:36.894 07:33:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.894 07:33:02 -- scripts/common.sh@365 -- # decimal 2 00:05:36.894 07:33:02 -- scripts/common.sh@352 -- # local d=2 00:05:36.894 07:33:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.894 07:33:02 -- scripts/common.sh@354 -- # echo 2 00:05:36.894 07:33:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.894 07:33:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.894 07:33:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.894 07:33:02 -- scripts/common.sh@367 -- # return 0 00:05:36.894 07:33:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.894 07:33:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.894 --rc genhtml_branch_coverage=1 00:05:36.894 --rc genhtml_function_coverage=1 00:05:36.894 --rc genhtml_legend=1 00:05:36.894 --rc geninfo_all_blocks=1 00:05:36.894 --rc geninfo_unexecuted_blocks=1 00:05:36.894 00:05:36.894 ' 00:05:36.894 07:33:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.894 --rc genhtml_branch_coverage=1 00:05:36.894 --rc genhtml_function_coverage=1 00:05:36.894 --rc genhtml_legend=1 00:05:36.894 --rc geninfo_all_blocks=1 00:05:36.894 --rc geninfo_unexecuted_blocks=1 00:05:36.894 00:05:36.894 ' 00:05:36.894 07:33:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.894 --rc genhtml_branch_coverage=1 00:05:36.894 --rc genhtml_function_coverage=1 00:05:36.894 --rc genhtml_legend=1 00:05:36.894 --rc geninfo_all_blocks=1 00:05:36.894 --rc geninfo_unexecuted_blocks=1 00:05:36.894 00:05:36.894 ' 00:05:36.894 07:33:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.894 --rc genhtml_branch_coverage=1 00:05:36.894 --rc genhtml_function_coverage=1 00:05:36.894 --rc genhtml_legend=1 00:05:36.894 --rc geninfo_all_blocks=1 00:05:36.894 --rc geninfo_unexecuted_blocks=1 00:05:36.894 00:05:36.894 ' 00:05:36.894 07:33:02 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.894 07:33:02 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57235 00:05:36.894 07:33:02 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:36.894 07:33:02 -- accel/accel_rpc.sh@15 -- # waitforlisten 57235 00:05:36.895 07:33:02 -- common/autotest_common.sh@829 -- # '[' -z 57235 ']' 00:05:36.895 07:33:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.895 07:33:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.895 07:33:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.895 07:33:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.895 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.153 [2024-12-02 07:33:02.564254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.153 [2024-12-02 07:33:02.564381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57235 ] 00:05:37.153 [2024-12-02 07:33:02.699990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.153 [2024-12-02 07:33:02.750870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.153 [2024-12-02 07:33:02.751054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.413 07:33:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.413 07:33:02 -- common/autotest_common.sh@862 -- # return 0 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:37.413 07:33:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.413 07:33:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.413 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.413 ************************************ 00:05:37.413 START TEST accel_assign_opcode 00:05:37.413 ************************************ 00:05:37.413 07:33:02 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:37.413 07:33:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.413 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.413 [2024-12-02 07:33:02.823455] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:37.413 07:33:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:37.413 07:33:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.413 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.413 [2024-12-02 07:33:02.831453] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:37.413 07:33:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:37.413 07:33:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.413 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.413 07:33:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:37.413 07:33:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.413 07:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.413 07:33:02 -- accel/accel_rpc.sh@42 -- # grep software 00:05:37.413 07:33:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.413 software 00:05:37.413 00:05:37.413 
real 0m0.188s 00:05:37.413 user 0m0.057s 00:05:37.413 sys 0m0.010s 00:05:37.413 07:33:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.413 07:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.413 ************************************ 00:05:37.413 END TEST accel_assign_opcode 00:05:37.413 ************************************ 00:05:37.672 07:33:03 -- accel/accel_rpc.sh@55 -- # killprocess 57235 00:05:37.672 07:33:03 -- common/autotest_common.sh@936 -- # '[' -z 57235 ']' 00:05:37.672 07:33:03 -- common/autotest_common.sh@940 -- # kill -0 57235 00:05:37.672 07:33:03 -- common/autotest_common.sh@941 -- # uname 00:05:37.672 07:33:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.672 07:33:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57235 00:05:37.672 07:33:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.672 killing process with pid 57235 00:05:37.672 07:33:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.672 07:33:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57235' 00:05:37.672 07:33:03 -- common/autotest_common.sh@955 -- # kill 57235 00:05:37.672 07:33:03 -- common/autotest_common.sh@960 -- # wait 57235 00:05:37.932 00:05:37.932 real 0m1.015s 00:05:37.932 user 0m1.049s 00:05:37.932 sys 0m0.307s 00:05:37.932 07:33:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.932 07:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.932 ************************************ 00:05:37.932 END TEST accel_rpc 00:05:37.932 ************************************ 00:05:37.932 07:33:03 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.932 07:33:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.932 07:33:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.932 07:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.932 ************************************ 00:05:37.932 START TEST app_cmdline 00:05:37.932 ************************************ 00:05:37.932 07:33:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.932 * Looking for test storage... 
00:05:37.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.932 07:33:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:37.932 07:33:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:37.932 07:33:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.932 07:33:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.932 07:33:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.932 07:33:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.932 07:33:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.932 07:33:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.932 07:33:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.932 07:33:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.932 07:33:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.932 07:33:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.932 07:33:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.932 07:33:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.932 07:33:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.932 07:33:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.932 07:33:03 -- scripts/common.sh@344 -- # : 1 00:05:37.932 07:33:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.932 07:33:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.932 07:33:03 -- scripts/common.sh@364 -- # decimal 1 00:05:37.932 07:33:03 -- scripts/common.sh@352 -- # local d=1 00:05:37.932 07:33:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.932 07:33:03 -- scripts/common.sh@354 -- # echo 1 00:05:37.932 07:33:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.932 07:33:03 -- scripts/common.sh@365 -- # decimal 2 00:05:37.932 07:33:03 -- scripts/common.sh@352 -- # local d=2 00:05:37.932 07:33:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.932 07:33:03 -- scripts/common.sh@354 -- # echo 2 00:05:37.932 07:33:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.932 07:33:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.932 07:33:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.932 07:33:03 -- scripts/common.sh@367 -- # return 0 00:05:37.932 07:33:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.932 07:33:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.932 --rc genhtml_branch_coverage=1 00:05:37.932 --rc genhtml_function_coverage=1 00:05:37.932 --rc genhtml_legend=1 00:05:37.932 --rc geninfo_all_blocks=1 00:05:37.932 --rc geninfo_unexecuted_blocks=1 00:05:37.932 00:05:37.932 ' 00:05:37.932 07:33:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.932 --rc genhtml_branch_coverage=1 00:05:37.932 --rc genhtml_function_coverage=1 00:05:37.932 --rc genhtml_legend=1 00:05:37.932 --rc geninfo_all_blocks=1 00:05:37.932 --rc geninfo_unexecuted_blocks=1 00:05:37.932 00:05:37.932 ' 00:05:37.932 07:33:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.932 --rc genhtml_branch_coverage=1 00:05:37.932 --rc genhtml_function_coverage=1 00:05:37.932 --rc genhtml_legend=1 00:05:37.932 --rc geninfo_all_blocks=1 00:05:37.932 --rc geninfo_unexecuted_blocks=1 00:05:37.932 00:05:37.932 ' 00:05:37.932 07:33:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.932 --rc genhtml_branch_coverage=1 00:05:37.932 --rc genhtml_function_coverage=1 00:05:37.932 --rc genhtml_legend=1 00:05:37.932 --rc geninfo_all_blocks=1 00:05:37.932 --rc geninfo_unexecuted_blocks=1 00:05:37.932 00:05:37.932 ' 00:05:37.932 07:33:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:37.932 07:33:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57322 00:05:37.932 07:33:03 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:37.932 07:33:03 -- app/cmdline.sh@18 -- # waitforlisten 57322 00:05:37.932 07:33:03 -- common/autotest_common.sh@829 -- # '[' -z 57322 ']' 00:05:37.932 07:33:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.932 07:33:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.932 07:33:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.932 07:33:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.932 07:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:38.192 [2024-12-02 07:33:03.607815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.192 [2024-12-02 07:33:03.607933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57322 ] 00:05:38.192 [2024-12-02 07:33:03.741097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.192 [2024-12-02 07:33:03.788073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.192 [2024-12-02 07:33:03.788245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.127 07:33:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.127 07:33:04 -- common/autotest_common.sh@862 -- # return 0 00:05:39.127 07:33:04 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:39.386 { 00:05:39.386 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:05:39.386 "fields": { 00:05:39.386 "major": 24, 00:05:39.386 "minor": 1, 00:05:39.386 "patch": 1, 00:05:39.386 "suffix": "-pre", 00:05:39.386 "commit": "c13c99a5e" 00:05:39.386 } 00:05:39.386 } 00:05:39.386 07:33:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:39.386 07:33:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:39.386 07:33:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:39.386 07:33:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:39.386 07:33:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:39.386 07:33:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:39.386 07:33:04 -- app/cmdline.sh@26 -- # sort 00:05:39.386 07:33:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.386 07:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:39.386 07:33:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.386 07:33:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:39.386 07:33:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:39.386 07:33:04 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.386 07:33:04 -- common/autotest_common.sh@650 -- # local es=0 00:05:39.386 07:33:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.386 07:33:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.386 07:33:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.386 07:33:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.386 07:33:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.386 07:33:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.386 07:33:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.386 07:33:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.386 07:33:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:39.386 07:33:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.649 request: 00:05:39.649 { 00:05:39.649 "method": "env_dpdk_get_mem_stats", 00:05:39.649 "req_id": 1 00:05:39.649 } 00:05:39.649 Got JSON-RPC error response 00:05:39.649 response: 00:05:39.649 { 00:05:39.649 "code": -32601, 00:05:39.649 "message": "Method not found" 00:05:39.649 } 00:05:39.649 07:33:05 -- common/autotest_common.sh@653 -- # es=1 00:05:39.649 07:33:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.649 07:33:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.649 07:33:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.649 07:33:05 -- app/cmdline.sh@1 -- # killprocess 57322 00:05:39.649 07:33:05 -- common/autotest_common.sh@936 -- # '[' -z 57322 ']' 00:05:39.649 07:33:05 -- common/autotest_common.sh@940 -- # kill -0 57322 00:05:39.649 07:33:05 -- common/autotest_common.sh@941 -- # uname 00:05:39.649 07:33:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.649 07:33:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57322 00:05:39.649 07:33:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.649 07:33:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.649 killing process with pid 57322 00:05:39.649 07:33:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57322' 00:05:39.649 07:33:05 -- common/autotest_common.sh@955 -- # kill 57322 00:05:39.649 07:33:05 -- common/autotest_common.sh@960 -- # wait 57322 00:05:39.907 00:05:39.907 real 0m2.000s 00:05:39.907 user 0m2.607s 00:05:39.907 sys 0m0.360s 00:05:39.907 07:33:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.907 07:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.907 ************************************ 00:05:39.907 END TEST app_cmdline 00:05:39.907 ************************************ 00:05:39.907 07:33:05 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.907 07:33:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.907 07:33:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.907 07:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.907 
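Editor's sketch (not part of the captured trace): the app_cmdline run above starts spdk_tgt with an RPC allow-list and verifies that only the two whitelisted methods answer. The commands below are the ones visible in the trace, collapsed into a runnable form; the relative paths, the backgrounding of spdk_tgt, and skipping the explicit socket wait are assumptions added for brevity.

    # assumes cwd is the spdk repo root, as in /home/vagrant/spdk_repo/spdk above
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # (the real test waits for /var/tmp/spdk.sock before issuing any RPCs)
    scripts/rpc.py spdk_get_version                      # allowed: returns the JSON version object shown above
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats                # rejected: JSON-RPC error -32601 "Method not found"

Killing the target afterwards (the kill/wait on the recorded pid in the trace) is what closes out the app_cmdline test before the version test starts below.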
************************************ 00:05:39.907 START TEST version 00:05:39.907 ************************************ 00:05:39.907 07:33:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.907 * Looking for test storage... 00:05:39.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:39.907 07:33:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.907 07:33:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.907 07:33:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.166 07:33:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.166 07:33:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.166 07:33:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.166 07:33:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.166 07:33:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.166 07:33:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.166 07:33:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.166 07:33:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.166 07:33:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.166 07:33:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.166 07:33:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.166 07:33:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.166 07:33:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.166 07:33:05 -- scripts/common.sh@344 -- # : 1 00:05:40.166 07:33:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.166 07:33:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.166 07:33:05 -- scripts/common.sh@364 -- # decimal 1 00:05:40.166 07:33:05 -- scripts/common.sh@352 -- # local d=1 00:05:40.166 07:33:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.166 07:33:05 -- scripts/common.sh@354 -- # echo 1 00:05:40.166 07:33:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.166 07:33:05 -- scripts/common.sh@365 -- # decimal 2 00:05:40.166 07:33:05 -- scripts/common.sh@352 -- # local d=2 00:05:40.166 07:33:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.166 07:33:05 -- scripts/common.sh@354 -- # echo 2 00:05:40.166 07:33:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.166 07:33:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.166 07:33:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.166 07:33:05 -- scripts/common.sh@367 -- # return 0 00:05:40.166 07:33:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.166 07:33:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.166 --rc genhtml_branch_coverage=1 00:05:40.166 --rc genhtml_function_coverage=1 00:05:40.166 --rc genhtml_legend=1 00:05:40.166 --rc geninfo_all_blocks=1 00:05:40.166 --rc geninfo_unexecuted_blocks=1 00:05:40.166 00:05:40.166 ' 00:05:40.167 07:33:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.167 --rc genhtml_branch_coverage=1 00:05:40.167 --rc genhtml_function_coverage=1 00:05:40.167 --rc genhtml_legend=1 00:05:40.167 --rc geninfo_all_blocks=1 00:05:40.167 --rc geninfo_unexecuted_blocks=1 00:05:40.167 00:05:40.167 ' 00:05:40.167 07:33:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.167 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:40.167 --rc genhtml_branch_coverage=1 00:05:40.167 --rc genhtml_function_coverage=1 00:05:40.167 --rc genhtml_legend=1 00:05:40.167 --rc geninfo_all_blocks=1 00:05:40.167 --rc geninfo_unexecuted_blocks=1 00:05:40.167 00:05:40.167 ' 00:05:40.167 07:33:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.167 --rc genhtml_branch_coverage=1 00:05:40.167 --rc genhtml_function_coverage=1 00:05:40.167 --rc genhtml_legend=1 00:05:40.167 --rc geninfo_all_blocks=1 00:05:40.167 --rc geninfo_unexecuted_blocks=1 00:05:40.167 00:05:40.167 ' 00:05:40.167 07:33:05 -- app/version.sh@17 -- # get_header_version major 00:05:40.167 07:33:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.167 07:33:05 -- app/version.sh@14 -- # cut -f2 00:05:40.167 07:33:05 -- app/version.sh@14 -- # tr -d '"' 00:05:40.167 07:33:05 -- app/version.sh@17 -- # major=24 00:05:40.167 07:33:05 -- app/version.sh@18 -- # get_header_version minor 00:05:40.167 07:33:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.167 07:33:05 -- app/version.sh@14 -- # cut -f2 00:05:40.167 07:33:05 -- app/version.sh@14 -- # tr -d '"' 00:05:40.167 07:33:05 -- app/version.sh@18 -- # minor=1 00:05:40.167 07:33:05 -- app/version.sh@19 -- # get_header_version patch 00:05:40.167 07:33:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.167 07:33:05 -- app/version.sh@14 -- # cut -f2 00:05:40.167 07:33:05 -- app/version.sh@14 -- # tr -d '"' 00:05:40.167 07:33:05 -- app/version.sh@19 -- # patch=1 00:05:40.167 07:33:05 -- app/version.sh@20 -- # get_header_version suffix 00:05:40.167 07:33:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.167 07:33:05 -- app/version.sh@14 -- # cut -f2 00:05:40.167 07:33:05 -- app/version.sh@14 -- # tr -d '"' 00:05:40.167 07:33:05 -- app/version.sh@20 -- # suffix=-pre 00:05:40.167 07:33:05 -- app/version.sh@22 -- # version=24.1 00:05:40.167 07:33:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:40.167 07:33:05 -- app/version.sh@25 -- # version=24.1.1 00:05:40.167 07:33:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:05:40.167 07:33:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:40.167 07:33:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:40.167 07:33:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:05:40.167 07:33:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:05:40.167 00:05:40.167 real 0m0.251s 00:05:40.167 user 0m0.171s 00:05:40.167 sys 0m0.118s 00:05:40.167 ************************************ 00:05:40.167 END TEST version 00:05:40.167 ************************************ 00:05:40.167 07:33:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.167 07:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:40.167 07:33:05 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:05:40.167 07:33:05 -- spdk/autotest.sh@191 -- # uname -s 00:05:40.167 07:33:05 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
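A minimal editorial sketch of the header parsing exercised by version.sh above; the grep/cut/tr pipeline and the version.h path are copied from the trace, while the loop wrapper and variable names are assumptions added for compactness.

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    for field in MAJOR MINOR PATCH SUFFIX; do
      # the macro value follows a tab in version.h, so cut's default -f2 isolates it
      grep -E "^#define SPDK_VERSION_${field}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    done

With the values shown in the trace (24, 1, 1, -pre) the script assembles 24.1.1rc0 and compares it against python3 -c 'import spdk; print(spdk.__version__)', which is the check that passes just before END TEST version.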
00:05:40.167 07:33:05 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:05:40.167 07:33:05 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:05:40.167 07:33:05 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:05:40.167 07:33:05 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:40.167 07:33:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.167 07:33:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.167 07:33:05 -- common/autotest_common.sh@10 -- # set +x 00:05:40.167 ************************************ 00:05:40.167 START TEST spdk_dd 00:05:40.167 ************************************ 00:05:40.167 07:33:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:40.425 * Looking for test storage... 00:05:40.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:40.425 07:33:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.426 07:33:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.426 07:33:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:40.426 07:33:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:40.426 07:33:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:40.426 07:33:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:40.426 07:33:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:40.426 07:33:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:40.426 07:33:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:40.426 07:33:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.426 07:33:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:40.426 07:33:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:40.426 07:33:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:40.426 07:33:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:40.426 07:33:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:40.426 07:33:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:40.426 07:33:05 -- scripts/common.sh@344 -- # : 1 00:05:40.426 07:33:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:40.426 07:33:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.426 07:33:05 -- scripts/common.sh@364 -- # decimal 1 00:05:40.426 07:33:05 -- scripts/common.sh@352 -- # local d=1 00:05:40.426 07:33:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.426 07:33:05 -- scripts/common.sh@354 -- # echo 1 00:05:40.426 07:33:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:40.426 07:33:05 -- scripts/common.sh@365 -- # decimal 2 00:05:40.426 07:33:05 -- scripts/common.sh@352 -- # local d=2 00:05:40.426 07:33:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.426 07:33:05 -- scripts/common.sh@354 -- # echo 2 00:05:40.426 07:33:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:40.426 07:33:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:40.426 07:33:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:40.426 07:33:05 -- scripts/common.sh@367 -- # return 0 00:05:40.426 07:33:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.426 07:33:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:40.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.426 --rc genhtml_branch_coverage=1 00:05:40.426 --rc genhtml_function_coverage=1 00:05:40.426 --rc genhtml_legend=1 00:05:40.426 --rc geninfo_all_blocks=1 00:05:40.426 --rc geninfo_unexecuted_blocks=1 00:05:40.426 00:05:40.426 ' 00:05:40.426 07:33:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:40.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.426 --rc genhtml_branch_coverage=1 00:05:40.426 --rc genhtml_function_coverage=1 00:05:40.426 --rc genhtml_legend=1 00:05:40.426 --rc geninfo_all_blocks=1 00:05:40.426 --rc geninfo_unexecuted_blocks=1 00:05:40.426 00:05:40.426 ' 00:05:40.426 07:33:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:40.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.426 --rc genhtml_branch_coverage=1 00:05:40.426 --rc genhtml_function_coverage=1 00:05:40.426 --rc genhtml_legend=1 00:05:40.426 --rc geninfo_all_blocks=1 00:05:40.426 --rc geninfo_unexecuted_blocks=1 00:05:40.426 00:05:40.426 ' 00:05:40.426 07:33:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:40.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.426 --rc genhtml_branch_coverage=1 00:05:40.426 --rc genhtml_function_coverage=1 00:05:40.426 --rc genhtml_legend=1 00:05:40.426 --rc geninfo_all_blocks=1 00:05:40.426 --rc geninfo_unexecuted_blocks=1 00:05:40.426 00:05:40.426 ' 00:05:40.426 07:33:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.426 07:33:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.426 07:33:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.426 07:33:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.426 07:33:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.426 07:33:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.426 07:33:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.426 07:33:05 -- paths/export.sh@5 -- # export PATH 00:05:40.426 07:33:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.426 07:33:05 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.685 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.685 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.946 07:33:06 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:40.946 07:33:06 -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:40.946 07:33:06 -- scripts/common.sh@311 -- # local bdf bdfs 00:05:40.946 07:33:06 -- scripts/common.sh@312 -- # local nvmes 00:05:40.946 07:33:06 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:05:40.946 07:33:06 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:40.946 07:33:06 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:05:40.946 07:33:06 -- scripts/common.sh@297 -- # local bdf= 00:05:40.946 07:33:06 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:05:40.946 07:33:06 -- scripts/common.sh@232 -- # local class 00:05:40.946 07:33:06 -- scripts/common.sh@233 -- # local subclass 00:05:40.946 07:33:06 -- scripts/common.sh@234 -- # local progif 00:05:40.946 07:33:06 -- scripts/common.sh@235 -- # printf %02x 1 00:05:40.946 07:33:06 -- scripts/common.sh@235 -- # class=01 00:05:40.946 07:33:06 -- scripts/common.sh@236 -- # printf %02x 8 00:05:40.946 07:33:06 -- scripts/common.sh@236 -- # subclass=08 00:05:40.946 07:33:06 -- scripts/common.sh@237 -- # printf %02x 2 00:05:40.946 07:33:06 -- scripts/common.sh@237 -- # progif=02 00:05:40.946 07:33:06 -- scripts/common.sh@239 -- # hash lspci 00:05:40.946 07:33:06 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:05:40.946 07:33:06 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:05:40.946 07:33:06 -- scripts/common.sh@242 -- # grep -i -- -p02 00:05:40.946 07:33:06 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:40.946 07:33:06 -- scripts/common.sh@244 -- # tr -d '"' 00:05:40.946 07:33:06 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:40.946 07:33:06 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:05:40.946 07:33:06 -- scripts/common.sh@15 -- # local i 00:05:40.946 07:33:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:05:40.946 07:33:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:40.946 07:33:06 -- scripts/common.sh@24 -- # return 0 00:05:40.946 07:33:06 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:05:40.946 07:33:06 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:40.946 07:33:06 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:05:40.946 07:33:06 -- scripts/common.sh@15 -- # local i 00:05:40.946 07:33:06 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:05:40.946 07:33:06 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:40.946 07:33:06 -- scripts/common.sh@24 -- # return 0 00:05:40.946 07:33:06 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:05:40.946 07:33:06 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:05:40.946 07:33:06 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:05:40.946 07:33:06 -- scripts/common.sh@322 -- # uname -s 00:05:40.946 07:33:06 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:05:40.946 07:33:06 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:05:40.946 07:33:06 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:05:40.946 07:33:06 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:05:40.946 07:33:06 -- scripts/common.sh@322 -- # uname -s 00:05:40.946 07:33:06 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:05:40.946 07:33:06 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:05:40.946 07:33:06 -- scripts/common.sh@327 -- # (( 2 )) 00:05:40.946 07:33:06 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:40.946 07:33:06 -- dd/dd.sh@13 -- # check_liburing 00:05:40.946 07:33:06 -- dd/common.sh@139 -- # local lib so 00:05:40.946 07:33:06 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:05:40.946 07:33:06 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:05:40.946 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.946 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:05:40.947 
07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_power.so.24 == 
liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:40.947 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.947 07:33:06 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:40.948 07:33:06 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:40.948 07:33:06 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:40.948 * spdk_dd linked to liburing 00:05:40.948 07:33:06 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:40.948 07:33:06 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:40.948 07:33:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:40.948 07:33:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:40.948 07:33:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:40.948 07:33:06 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:40.948 07:33:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:40.948 07:33:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:40.948 07:33:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:40.948 07:33:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:40.948 07:33:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:40.948 07:33:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:40.948 07:33:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:40.948 
07:33:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:40.948 07:33:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:40.948 07:33:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:40.948 07:33:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:40.948 07:33:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:40.948 07:33:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:40.948 07:33:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:40.948 07:33:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:40.948 07:33:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:40.948 07:33:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:40.948 07:33:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:40.948 07:33:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:40.948 07:33:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:40.948 07:33:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:40.948 07:33:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:40.948 07:33:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:40.948 07:33:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:40.948 07:33:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:40.948 07:33:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:40.948 07:33:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:40.948 07:33:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:40.948 07:33:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:40.948 07:33:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:40.948 07:33:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:40.948 07:33:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:40.948 07:33:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:40.948 07:33:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:40.948 07:33:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:40.948 07:33:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:40.948 07:33:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:40.948 07:33:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:40.948 07:33:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:40.948 07:33:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:40.948 07:33:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:40.948 07:33:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:40.948 07:33:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:40.948 07:33:06 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:05:40.948 07:33:06 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:05:40.948 07:33:06 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:05:40.948 07:33:06 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:05:40.948 07:33:06 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:05:40.948 07:33:06 -- 
common/build_config.sh@59 -- # CONFIG_ISAL=y 00:05:40.948 07:33:06 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:05:40.948 07:33:06 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:05:40.948 07:33:06 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:05:40.948 07:33:06 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:05:40.948 07:33:06 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:05:40.948 07:33:06 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:05:40.948 07:33:06 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:40.948 07:33:06 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:05:40.948 07:33:06 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:05:40.948 07:33:06 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:05:40.948 07:33:06 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:05:40.948 07:33:06 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:05:40.948 07:33:06 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:05:40.948 07:33:06 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:05:40.948 07:33:06 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:05:40.948 07:33:06 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:05:40.948 07:33:06 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:05:40.948 07:33:06 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:40.948 07:33:06 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:05:40.948 07:33:06 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:05:40.948 07:33:06 -- dd/common.sh@149 -- # [[ y != y ]] 00:05:40.948 07:33:06 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:05:40.948 07:33:06 -- dd/common.sh@156 -- # export liburing_in_use=1 00:05:40.948 07:33:06 -- dd/common.sh@156 -- # liburing_in_use=1 00:05:40.948 07:33:06 -- dd/common.sh@157 -- # return 0 00:05:40.948 07:33:06 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:40.949 07:33:06 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:05:40.949 07:33:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:40.949 07:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.949 07:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:40.949 ************************************ 00:05:40.949 START TEST spdk_dd_basic_rw 00:05:40.949 ************************************ 00:05:40.949 07:33:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:05:40.949 * Looking for test storage... 
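The check_liburing pass traced above decides whether this spdk_dd build actually links liburing: it runs the binary under the dynamic loader's trace mode, scans each reported shared object for a liburing.so.* name, and then cross-checks the build configuration (CONFIG_URING=y) and the presence of /usr/lib64/liburing.so.2 before setting liburing_in_use=1, which is what lets dd.sh proceed when SPDK_TEST_URING=1. A hand-runnable sketch of the same idea, using the binary path from this log (an illustration, not the suite's own helper):

    LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd |
    while read -r lib _ so _; do
        # loader trace lines look like: liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)
        [[ $lib == liburing.so.* ]] && echo "* spdk_dd linked to liburing ($so)"
    done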
00:05:40.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:40.949 07:33:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:40.949 07:33:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:40.949 07:33:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.209 07:33:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.209 07:33:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.209 07:33:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.209 07:33:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.209 07:33:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.209 07:33:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.209 07:33:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.209 07:33:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.209 07:33:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.209 07:33:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.209 07:33:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.209 07:33:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.209 07:33:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.209 07:33:06 -- scripts/common.sh@344 -- # : 1 00:05:41.209 07:33:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.209 07:33:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.209 07:33:06 -- scripts/common.sh@364 -- # decimal 1 00:05:41.209 07:33:06 -- scripts/common.sh@352 -- # local d=1 00:05:41.209 07:33:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.209 07:33:06 -- scripts/common.sh@354 -- # echo 1 00:05:41.209 07:33:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.209 07:33:06 -- scripts/common.sh@365 -- # decimal 2 00:05:41.209 07:33:06 -- scripts/common.sh@352 -- # local d=2 00:05:41.209 07:33:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.209 07:33:06 -- scripts/common.sh@354 -- # echo 2 00:05:41.209 07:33:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.209 07:33:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.209 07:33:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.209 07:33:06 -- scripts/common.sh@367 -- # return 0 00:05:41.209 07:33:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.209 07:33:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.209 --rc genhtml_branch_coverage=1 00:05:41.209 --rc genhtml_function_coverage=1 00:05:41.209 --rc genhtml_legend=1 00:05:41.209 --rc geninfo_all_blocks=1 00:05:41.209 --rc geninfo_unexecuted_blocks=1 00:05:41.209 00:05:41.209 ' 00:05:41.209 07:33:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.209 --rc genhtml_branch_coverage=1 00:05:41.209 --rc genhtml_function_coverage=1 00:05:41.209 --rc genhtml_legend=1 00:05:41.209 --rc geninfo_all_blocks=1 00:05:41.209 --rc geninfo_unexecuted_blocks=1 00:05:41.209 00:05:41.209 ' 00:05:41.209 07:33:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.209 --rc genhtml_branch_coverage=1 00:05:41.209 --rc genhtml_function_coverage=1 00:05:41.209 --rc genhtml_legend=1 00:05:41.209 --rc geninfo_all_blocks=1 00:05:41.209 --rc geninfo_unexecuted_blocks=1 00:05:41.209 00:05:41.209 ' 00:05:41.209 07:33:06 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.210 --rc genhtml_branch_coverage=1 00:05:41.210 --rc genhtml_function_coverage=1 00:05:41.210 --rc genhtml_legend=1 00:05:41.210 --rc geninfo_all_blocks=1 00:05:41.210 --rc geninfo_unexecuted_blocks=1 00:05:41.210 00:05:41.210 ' 00:05:41.210 07:33:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.210 07:33:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.210 07:33:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.210 07:33:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.210 07:33:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.210 07:33:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.210 07:33:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.210 07:33:06 -- paths/export.sh@5 -- # export PATH 00:05:41.210 07:33:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.210 07:33:06 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:41.210 07:33:06 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:41.210 07:33:06 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:41.210 07:33:06 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:05:41.210 07:33:06 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:41.210 07:33:06 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:05:41.210 07:33:06 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:41.210 07:33:06 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:41.210 07:33:06 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.210 07:33:06 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:05:41.210 07:33:06 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:05:41.210 07:33:06 -- dd/common.sh@126 -- # mapfile -t id 00:05:41.210 07:33:06 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:05:41.472 07:33:06 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 102 Data Units Written: 9 Host Read Commands: 2333 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:41.472 07:33:06 -- dd/common.sh@130 -- # lbaf=04 00:05:41.473 07:33:06 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 102 Data Units Written: 9 Host Read Commands: 2333 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:41.473 07:33:06 -- dd/common.sh@132 -- # lbaf=4096 00:05:41.473 07:33:06 -- dd/common.sh@134 -- # echo 4096 00:05:41.473 07:33:06 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:41.473 07:33:06 -- dd/basic_rw.sh@96 -- # : 00:05:41.473 07:33:06 -- dd/basic_rw.sh@96 -- # gen_conf 00:05:41.473 07:33:06 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.473 07:33:06 -- dd/common.sh@31 -- # xtrace_disable 
00:05:41.473 07:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.473 07:33:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:41.473 07:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.473 07:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.473 ************************************ 00:05:41.473 START TEST dd_bs_lt_native_bs 00:05:41.473 ************************************ 00:05:41.473 07:33:06 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.473 07:33:06 -- common/autotest_common.sh@650 -- # local es=0 00:05:41.473 07:33:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.473 07:33:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.473 07:33:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.473 07:33:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.473 07:33:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.473 07:33:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.473 07:33:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.473 07:33:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.473 07:33:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.473 07:33:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.473 { 00:05:41.473 "subsystems": [ 00:05:41.473 { 00:05:41.473 "subsystem": "bdev", 00:05:41.473 "config": [ 00:05:41.473 { 00:05:41.473 "params": { 00:05:41.473 "trtype": "pcie", 00:05:41.473 "traddr": "0000:00:06.0", 00:05:41.473 "name": "Nvme0" 00:05:41.473 }, 00:05:41.473 "method": "bdev_nvme_attach_controller" 00:05:41.473 }, 00:05:41.473 { 00:05:41.473 "method": "bdev_wait_for_examine" 00:05:41.473 } 00:05:41.473 ] 00:05:41.473 } 00:05:41.473 ] 00:05:41.473 } 00:05:41.473 [2024-12-02 07:33:06.899964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
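The "subsystems" JSON printed just above is not hand-written: gen_conf (the suite's helper) serializes the method_bdev_nvme_attach_controller_0 associative array declared earlier into a bdev config and feeds it to spdk_dd over a file descriptor, so every invocation in this test attaches the same PCIe controller as bdev Nvme0n1. Roughly, with the values from this log (a sketch of the plumbing, not the literal script):

    declare -A method_bdev_nvme_attach_controller_0=([name]=Nvme0 [traddr]=0000:00:06.0 [trtype]=pcie)
    # gen_conf turns the method_* variable above into the bdev JSON shown in the log
    spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --json <(gen_conf)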
00:05:41.473 [2024-12-02 07:33:06.900056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57670 ] 00:05:41.473 [2024-12-02 07:33:07.036941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.473 [2024-12-02 07:33:07.089940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.733 [2024-12-02 07:33:07.199519] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:41.733 [2024-12-02 07:33:07.199604] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.733 [2024-12-02 07:33:07.261370] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:05:41.733 07:33:07 -- common/autotest_common.sh@653 -- # es=234 00:05:41.733 07:33:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.733 07:33:07 -- common/autotest_common.sh@662 -- # es=106 00:05:41.733 07:33:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:41.733 07:33:07 -- common/autotest_common.sh@670 -- # es=1 00:05:41.733 07:33:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.733 00:05:41.733 real 0m0.506s 00:05:41.733 user 0m0.345s 00:05:41.733 sys 0m0.115s 00:05:41.733 07:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.733 07:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.733 ************************************ 00:05:41.733 END TEST dd_bs_lt_native_bs 00:05:41.733 ************************************ 00:05:41.993 07:33:07 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:41.993 07:33:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:41.993 07:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.993 07:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.993 ************************************ 00:05:41.993 START TEST dd_rw 00:05:41.993 ************************************ 00:05:41.993 07:33:07 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:05:41.993 07:33:07 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:41.993 07:33:07 -- dd/basic_rw.sh@12 -- # local count size 00:05:41.993 07:33:07 -- dd/basic_rw.sh@13 -- # local qds bss 00:05:41.993 07:33:07 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:41.993 07:33:07 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:41.993 07:33:07 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:41.993 07:33:07 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:41.993 07:33:07 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:41.993 07:33:07 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:41.993 07:33:07 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:41.993 07:33:07 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:41.993 07:33:07 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:41.993 07:33:07 -- dd/basic_rw.sh@23 -- # count=15 00:05:41.993 07:33:07 -- dd/basic_rw.sh@24 -- # count=15 00:05:41.993 07:33:07 -- dd/basic_rw.sh@25 -- # size=61440 00:05:41.993 07:33:07 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:41.993 07:33:07 -- dd/common.sh@98 -- # xtrace_disable 00:05:41.993 07:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:42.562 07:33:07 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
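The error above is the whole point of dd_bs_lt_native_bs: get_native_nvme_bs pulled the in-use LBA data size out of the spdk_nvme_identify dump (first matching the in-use format index from "Current LBA Format: LBA Format #04", then that format's "Data Size: 4096"), so the suite knows the namespace's native block is 4096 bytes, asks spdk_dd for --bs=2048 anyway, and wraps the call in NOT so the test passes only because spdk_dd refuses the copy with a non-zero exit status. A rough out-of-suite equivalent of the extraction, with the controller address from this log:

    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
    lbaf=$(grep -oP 'Current LBA Format: *LBA Format #\K[0-9]+' <<< "$id")     # 04
    grep -oP "LBA Format #$lbaf: Data Size: *\K[0-9]+" <<< "$id"               # 4096, the native block size

The dd_rw rounds that follow reuse that 4096 as the baseline block size.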
00:05:42.562 07:33:07 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:42.562 07:33:07 -- dd/common.sh@31 -- # xtrace_disable 00:05:42.563 07:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:42.563 [2024-12-02 07:33:08.041420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.563 [2024-12-02 07:33:08.041512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57702 ] 00:05:42.563 { 00:05:42.563 "subsystems": [ 00:05:42.563 { 00:05:42.563 "subsystem": "bdev", 00:05:42.563 "config": [ 00:05:42.563 { 00:05:42.563 "params": { 00:05:42.563 "trtype": "pcie", 00:05:42.563 "traddr": "0000:00:06.0", 00:05:42.563 "name": "Nvme0" 00:05:42.563 }, 00:05:42.563 "method": "bdev_nvme_attach_controller" 00:05:42.563 }, 00:05:42.563 { 00:05:42.563 "method": "bdev_wait_for_examine" 00:05:42.563 } 00:05:42.563 ] 00:05:42.563 } 00:05:42.563 ] 00:05:42.563 } 00:05:42.563 [2024-12-02 07:33:08.178972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.823 [2024-12-02 07:33:08.231013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.823  [2024-12-02T07:33:08.706Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:43.082 00:05:43.082 07:33:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:43.082 07:33:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:43.082 07:33:08 -- dd/common.sh@31 -- # xtrace_disable 00:05:43.082 07:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:43.082 [2024-12-02 07:33:08.557982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:43.083 [2024-12-02 07:33:08.558074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57715 ] 00:05:43.083 { 00:05:43.083 "subsystems": [ 00:05:43.083 { 00:05:43.083 "subsystem": "bdev", 00:05:43.083 "config": [ 00:05:43.083 { 00:05:43.083 "params": { 00:05:43.083 "trtype": "pcie", 00:05:43.083 "traddr": "0000:00:06.0", 00:05:43.083 "name": "Nvme0" 00:05:43.083 }, 00:05:43.083 "method": "bdev_nvme_attach_controller" 00:05:43.083 }, 00:05:43.083 { 00:05:43.083 "method": "bdev_wait_for_examine" 00:05:43.083 } 00:05:43.083 ] 00:05:43.083 } 00:05:43.083 ] 00:05:43.083 } 00:05:43.083 [2024-12-02 07:33:08.695088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.342 [2024-12-02 07:33:08.742232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.342  [2024-12-02T07:33:09.226Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:43.602 00:05:43.602 07:33:09 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.602 07:33:09 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:43.602 07:33:09 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:43.602 07:33:09 -- dd/common.sh@11 -- # local nvme_ref= 00:05:43.602 07:33:09 -- dd/common.sh@12 -- # local size=61440 00:05:43.602 07:33:09 -- dd/common.sh@14 -- # local bs=1048576 00:05:43.602 07:33:09 -- dd/common.sh@15 -- # local count=1 00:05:43.602 07:33:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:43.602 07:33:09 -- dd/common.sh@18 -- # gen_conf 00:05:43.602 07:33:09 -- dd/common.sh@31 -- # xtrace_disable 00:05:43.602 07:33:09 -- common/autotest_common.sh@10 -- # set +x 00:05:43.602 [2024-12-02 07:33:09.062522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:43.602 [2024-12-02 07:33:09.062615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57728 ] 00:05:43.602 { 00:05:43.602 "subsystems": [ 00:05:43.602 { 00:05:43.602 "subsystem": "bdev", 00:05:43.602 "config": [ 00:05:43.602 { 00:05:43.602 "params": { 00:05:43.602 "trtype": "pcie", 00:05:43.602 "traddr": "0000:00:06.0", 00:05:43.602 "name": "Nvme0" 00:05:43.602 }, 00:05:43.602 "method": "bdev_nvme_attach_controller" 00:05:43.602 }, 00:05:43.602 { 00:05:43.602 "method": "bdev_wait_for_examine" 00:05:43.602 } 00:05:43.602 ] 00:05:43.602 } 00:05:43.602 ] 00:05:43.602 } 00:05:43.602 [2024-12-02 07:33:09.191424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.861 [2024-12-02 07:33:09.239404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.861  [2024-12-02T07:33:09.745Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:44.121 00:05:44.121 07:33:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:44.121 07:33:09 -- dd/basic_rw.sh@23 -- # count=15 00:05:44.121 07:33:09 -- dd/basic_rw.sh@24 -- # count=15 00:05:44.121 07:33:09 -- dd/basic_rw.sh@25 -- # size=61440 00:05:44.121 07:33:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:44.121 07:33:09 -- dd/common.sh@98 -- # xtrace_disable 00:05:44.121 07:33:09 -- common/autotest_common.sh@10 -- # set +x 00:05:44.691 07:33:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:44.691 07:33:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:44.691 07:33:10 -- dd/common.sh@31 -- # xtrace_disable 00:05:44.691 07:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.691 [2024-12-02 07:33:10.142035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:44.691 [2024-12-02 07:33:10.142134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57747 ] 00:05:44.691 { 00:05:44.691 "subsystems": [ 00:05:44.691 { 00:05:44.691 "subsystem": "bdev", 00:05:44.691 "config": [ 00:05:44.691 { 00:05:44.691 "params": { 00:05:44.691 "trtype": "pcie", 00:05:44.691 "traddr": "0000:00:06.0", 00:05:44.691 "name": "Nvme0" 00:05:44.691 }, 00:05:44.691 "method": "bdev_nvme_attach_controller" 00:05:44.691 }, 00:05:44.691 { 00:05:44.691 "method": "bdev_wait_for_examine" 00:05:44.691 } 00:05:44.691 ] 00:05:44.691 } 00:05:44.691 ] 00:05:44.691 } 00:05:44.691 [2024-12-02 07:33:10.278973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.950 [2024-12-02 07:33:10.330579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.950  [2024-12-02T07:33:10.833Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:45.209 00:05:45.209 07:33:10 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:45.209 07:33:10 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:45.209 07:33:10 -- dd/common.sh@31 -- # xtrace_disable 00:05:45.209 07:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:45.209 [2024-12-02 07:33:10.642021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.209 [2024-12-02 07:33:10.642114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57761 ] 00:05:45.209 { 00:05:45.209 "subsystems": [ 00:05:45.209 { 00:05:45.209 "subsystem": "bdev", 00:05:45.209 "config": [ 00:05:45.209 { 00:05:45.209 "params": { 00:05:45.209 "trtype": "pcie", 00:05:45.209 "traddr": "0000:00:06.0", 00:05:45.209 "name": "Nvme0" 00:05:45.209 }, 00:05:45.209 "method": "bdev_nvme_attach_controller" 00:05:45.209 }, 00:05:45.209 { 00:05:45.209 "method": "bdev_wait_for_examine" 00:05:45.209 } 00:05:45.209 ] 00:05:45.209 } 00:05:45.210 ] 00:05:45.210 } 00:05:45.210 [2024-12-02 07:33:10.763929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.210 [2024-12-02 07:33:10.809555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.469  [2024-12-02T07:33:11.093Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:45.469 00:05:45.469 07:33:11 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.729 07:33:11 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:45.729 07:33:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.729 07:33:11 -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.729 07:33:11 -- dd/common.sh@12 -- # local size=61440 00:05:45.729 07:33:11 -- dd/common.sh@14 -- # local bs=1048576 00:05:45.729 07:33:11 -- dd/common.sh@15 -- # local count=1 00:05:45.729 07:33:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.729 07:33:11 -- dd/common.sh@18 -- # gen_conf 00:05:45.729 07:33:11 -- dd/common.sh@31 -- # xtrace_disable 00:05:45.729 07:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:45.729 [2024-12-02 
07:33:11.136005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.729 [2024-12-02 07:33:11.136084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57774 ] 00:05:45.729 { 00:05:45.729 "subsystems": [ 00:05:45.729 { 00:05:45.729 "subsystem": "bdev", 00:05:45.729 "config": [ 00:05:45.729 { 00:05:45.729 "params": { 00:05:45.729 "trtype": "pcie", 00:05:45.729 "traddr": "0000:00:06.0", 00:05:45.729 "name": "Nvme0" 00:05:45.729 }, 00:05:45.729 "method": "bdev_nvme_attach_controller" 00:05:45.729 }, 00:05:45.729 { 00:05:45.729 "method": "bdev_wait_for_examine" 00:05:45.729 } 00:05:45.729 ] 00:05:45.729 } 00:05:45.729 ] 00:05:45.729 } 00:05:45.729 [2024-12-02 07:33:11.264420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.729 [2024-12-02 07:33:11.311399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.989  [2024-12-02T07:33:11.613Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:45.989 00:05:45.989 07:33:11 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:45.989 07:33:11 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.989 07:33:11 -- dd/basic_rw.sh@23 -- # count=7 00:05:45.989 07:33:11 -- dd/basic_rw.sh@24 -- # count=7 00:05:45.989 07:33:11 -- dd/basic_rw.sh@25 -- # size=57344 00:05:45.989 07:33:11 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:45.989 07:33:11 -- dd/common.sh@98 -- # xtrace_disable 00:05:45.989 07:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.556 07:33:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:46.556 07:33:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.556 07:33:12 -- dd/common.sh@31 -- # xtrace_disable 00:05:46.556 07:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:46.556 [2024-12-02 07:33:12.146645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
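Condensed, the 4096-byte pass that completes above is a four-step cycle. A minimal sketch of it, using the paths and flags visible in the trace (gen_bytes, gen_conf and clear_nvme are helpers from the dd test library whose bodies are not part of this excerpt, and the redirection of the random payload into dd.dump0 is an assumption):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    D1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    # bs=4096, qd=64: 15 blocks, 61440 bytes, as in the pass traced above
    gen_bytes 61440 > "$D0"                                                       # random source payload (redirection assumed)
    "$DD" --if="$D0" --ob=Nvme0n1 --bs=4096 --qd=64 --json <(gen_conf)            # write the payload to the bdev
    "$DD" --ib=Nvme0n1 --of="$D1" --bs=4096 --qd=64 --count=15 --json <(gen_conf) # read the same 15 blocks back
    diff -q "$D0" "$D1"                                                           # byte-for-byte verification
    clear_nvme Nvme0n1 '' 61440                                                   # zero the device region before the next pass

The passes that follow repeat exactly this cycle; only bs, qd, count and the payload size change.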
00:05:46.556 [2024-12-02 07:33:12.147324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57788 ] 00:05:46.557 { 00:05:46.557 "subsystems": [ 00:05:46.557 { 00:05:46.557 "subsystem": "bdev", 00:05:46.557 "config": [ 00:05:46.557 { 00:05:46.557 "params": { 00:05:46.557 "trtype": "pcie", 00:05:46.557 "traddr": "0000:00:06.0", 00:05:46.557 "name": "Nvme0" 00:05:46.557 }, 00:05:46.557 "method": "bdev_nvme_attach_controller" 00:05:46.557 }, 00:05:46.557 { 00:05:46.557 "method": "bdev_wait_for_examine" 00:05:46.557 } 00:05:46.557 ] 00:05:46.557 } 00:05:46.557 ] 00:05:46.557 } 00:05:46.815 [2024-12-02 07:33:12.283491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.815 [2024-12-02 07:33:12.331707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.815  [2024-12-02T07:33:12.698Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:47.074 00:05:47.074 07:33:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:47.074 07:33:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.074 07:33:12 -- dd/common.sh@31 -- # xtrace_disable 00:05:47.074 07:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:47.074 [2024-12-02 07:33:12.658402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.074 [2024-12-02 07:33:12.658496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57805 ] 00:05:47.074 { 00:05:47.074 "subsystems": [ 00:05:47.074 { 00:05:47.074 "subsystem": "bdev", 00:05:47.074 "config": [ 00:05:47.074 { 00:05:47.074 "params": { 00:05:47.074 "trtype": "pcie", 00:05:47.074 "traddr": "0000:00:06.0", 00:05:47.074 "name": "Nvme0" 00:05:47.074 }, 00:05:47.074 "method": "bdev_nvme_attach_controller" 00:05:47.074 }, 00:05:47.074 { 00:05:47.074 "method": "bdev_wait_for_examine" 00:05:47.074 } 00:05:47.074 ] 00:05:47.074 } 00:05:47.074 ] 00:05:47.074 } 00:05:47.333 [2024-12-02 07:33:12.795073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.333 [2024-12-02 07:33:12.841671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.333  [2024-12-02T07:33:13.216Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:47.592 00:05:47.592 07:33:13 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.592 07:33:13 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:47.592 07:33:13 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:47.592 07:33:13 -- dd/common.sh@11 -- # local nvme_ref= 00:05:47.592 07:33:13 -- dd/common.sh@12 -- # local size=57344 00:05:47.592 07:33:13 -- dd/common.sh@14 -- # local bs=1048576 00:05:47.592 07:33:13 -- dd/common.sh@15 -- # local count=1 00:05:47.592 07:33:13 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:47.592 07:33:13 -- dd/common.sh@18 -- # gen_conf 00:05:47.592 07:33:13 -- dd/common.sh@31 -- # xtrace_disable 00:05:47.592 07:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.592 [2024-12-02 
07:33:13.167355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.592 [2024-12-02 07:33:13.167445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57813 ] 00:05:47.592 { 00:05:47.592 "subsystems": [ 00:05:47.592 { 00:05:47.592 "subsystem": "bdev", 00:05:47.592 "config": [ 00:05:47.592 { 00:05:47.592 "params": { 00:05:47.592 "trtype": "pcie", 00:05:47.592 "traddr": "0000:00:06.0", 00:05:47.592 "name": "Nvme0" 00:05:47.592 }, 00:05:47.592 "method": "bdev_nvme_attach_controller" 00:05:47.592 }, 00:05:47.592 { 00:05:47.592 "method": "bdev_wait_for_examine" 00:05:47.592 } 00:05:47.592 ] 00:05:47.593 } 00:05:47.593 ] 00:05:47.593 } 00:05:47.851 [2024-12-02 07:33:13.303200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.851 [2024-12-02 07:33:13.349960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.851  [2024-12-02T07:33:13.745Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:48.121 00:05:48.121 07:33:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:48.121 07:33:13 -- dd/basic_rw.sh@23 -- # count=7 00:05:48.121 07:33:13 -- dd/basic_rw.sh@24 -- # count=7 00:05:48.121 07:33:13 -- dd/basic_rw.sh@25 -- # size=57344 00:05:48.121 07:33:13 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:48.121 07:33:13 -- dd/common.sh@98 -- # xtrace_disable 00:05:48.121 07:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 07:33:14 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:48.688 07:33:14 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:48.688 07:33:14 -- dd/common.sh@31 -- # xtrace_disable 00:05:48.688 07:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 [2024-12-02 07:33:14.185878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
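Every spdk_dd invocation in this suite receives its bdev configuration on /dev/fd/62, i.e. as process substitution of the gen_conf helper's output, and that is the JSON document repeated throughout the trace. A hypothetical stand-in for the helper that emits the same configuration (the real one lives in the dd test library):

    gen_conf() {   # assumed stand-in; emits the bdev subsystem config shown in the trace
        printf '%s\n' \
            '{' \
            '  "subsystems": [' \
            '    {' \
            '      "subsystem": "bdev",' \
            '      "config": [' \
            '        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },' \
            '          "method": "bdev_nvme_attach_controller" },' \
            '        { "method": "bdev_wait_for_examine" }' \
            '      ]' \
            '    }' \
            '  ]' \
            '}'
    }

    # Usage matching the calls above: spdk_dd ... --json <(gen_conf)

It attaches the PCIe controller at 0000:00:06.0 as bdev Nvme0 and waits for bdev examination to finish before the copy starts.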
00:05:48.688 [2024-12-02 07:33:14.185970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57831 ] 00:05:48.688 { 00:05:48.688 "subsystems": [ 00:05:48.688 { 00:05:48.688 "subsystem": "bdev", 00:05:48.688 "config": [ 00:05:48.688 { 00:05:48.688 "params": { 00:05:48.688 "trtype": "pcie", 00:05:48.688 "traddr": "0000:00:06.0", 00:05:48.688 "name": "Nvme0" 00:05:48.688 }, 00:05:48.688 "method": "bdev_nvme_attach_controller" 00:05:48.688 }, 00:05:48.688 { 00:05:48.688 "method": "bdev_wait_for_examine" 00:05:48.688 } 00:05:48.688 ] 00:05:48.688 } 00:05:48.688 ] 00:05:48.688 } 00:05:48.948 [2024-12-02 07:33:14.318043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.948 [2024-12-02 07:33:14.365847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.948  [2024-12-02T07:33:14.831Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:49.207 00:05:49.207 07:33:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:49.207 07:33:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:49.207 07:33:14 -- dd/common.sh@31 -- # xtrace_disable 00:05:49.207 07:33:14 -- common/autotest_common.sh@10 -- # set +x 00:05:49.207 [2024-12-02 07:33:14.695114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.207 [2024-12-02 07:33:14.695208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57849 ] 00:05:49.207 { 00:05:49.207 "subsystems": [ 00:05:49.207 { 00:05:49.207 "subsystem": "bdev", 00:05:49.207 "config": [ 00:05:49.207 { 00:05:49.207 "params": { 00:05:49.207 "trtype": "pcie", 00:05:49.207 "traddr": "0000:00:06.0", 00:05:49.207 "name": "Nvme0" 00:05:49.207 }, 00:05:49.207 "method": "bdev_nvme_attach_controller" 00:05:49.207 }, 00:05:49.207 { 00:05:49.207 "method": "bdev_wait_for_examine" 00:05:49.207 } 00:05:49.207 ] 00:05:49.207 } 00:05:49.207 ] 00:05:49.207 } 00:05:49.466 [2024-12-02 07:33:14.832145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.466 [2024-12-02 07:33:14.878678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.466  [2024-12-02T07:33:15.349Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:49.725 00:05:49.725 07:33:15 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.725 07:33:15 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:49.725 07:33:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:49.725 07:33:15 -- dd/common.sh@11 -- # local nvme_ref= 00:05:49.725 07:33:15 -- dd/common.sh@12 -- # local size=57344 00:05:49.725 07:33:15 -- dd/common.sh@14 -- # local bs=1048576 00:05:49.725 07:33:15 -- dd/common.sh@15 -- # local count=1 00:05:49.725 07:33:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:49.725 07:33:15 -- dd/common.sh@18 -- # gen_conf 00:05:49.725 07:33:15 -- dd/common.sh@31 -- # xtrace_disable 00:05:49.725 07:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:49.725 { 00:05:49.725 
"subsystems": [ 00:05:49.725 { 00:05:49.725 "subsystem": "bdev", 00:05:49.725 "config": [ 00:05:49.725 { 00:05:49.725 "params": { 00:05:49.725 "trtype": "pcie", 00:05:49.725 "traddr": "0000:00:06.0", 00:05:49.725 "name": "Nvme0" 00:05:49.725 }, 00:05:49.725 "method": "bdev_nvme_attach_controller" 00:05:49.725 }, 00:05:49.725 { 00:05:49.725 "method": "bdev_wait_for_examine" 00:05:49.725 } 00:05:49.725 ] 00:05:49.725 } 00:05:49.725 ] 00:05:49.725 } 00:05:49.725 [2024-12-02 07:33:15.211473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.725 [2024-12-02 07:33:15.211576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57857 ] 00:05:49.725 [2024-12-02 07:33:15.345628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.983 [2024-12-02 07:33:15.392230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.983  [2024-12-02T07:33:15.866Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:50.242 00:05:50.242 07:33:15 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:50.242 07:33:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:50.242 07:33:15 -- dd/basic_rw.sh@23 -- # count=3 00:05:50.242 07:33:15 -- dd/basic_rw.sh@24 -- # count=3 00:05:50.242 07:33:15 -- dd/basic_rw.sh@25 -- # size=49152 00:05:50.242 07:33:15 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:50.242 07:33:15 -- dd/common.sh@98 -- # xtrace_disable 00:05:50.242 07:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:50.500 07:33:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:50.501 07:33:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:50.501 07:33:16 -- dd/common.sh@31 -- # xtrace_disable 00:05:50.501 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:05:50.760 [2024-12-02 07:33:16.157569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:50.760 [2024-12-02 07:33:16.157658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57875 ] 00:05:50.760 { 00:05:50.760 "subsystems": [ 00:05:50.760 { 00:05:50.760 "subsystem": "bdev", 00:05:50.760 "config": [ 00:05:50.760 { 00:05:50.760 "params": { 00:05:50.760 "trtype": "pcie", 00:05:50.760 "traddr": "0000:00:06.0", 00:05:50.760 "name": "Nvme0" 00:05:50.760 }, 00:05:50.760 "method": "bdev_nvme_attach_controller" 00:05:50.760 }, 00:05:50.760 { 00:05:50.760 "method": "bdev_wait_for_examine" 00:05:50.760 } 00:05:50.760 ] 00:05:50.760 } 00:05:50.760 ] 00:05:50.760 } 00:05:50.760 [2024-12-02 07:33:16.296070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.760 [2024-12-02 07:33:16.362426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.020  [2024-12-02T07:33:16.644Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:51.020 00:05:51.280 07:33:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:51.280 07:33:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:51.280 07:33:16 -- dd/common.sh@31 -- # xtrace_disable 00:05:51.280 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:05:51.280 [2024-12-02 07:33:16.696410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.280 [2024-12-02 07:33:16.696512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57893 ] 00:05:51.280 { 00:05:51.280 "subsystems": [ 00:05:51.280 { 00:05:51.280 "subsystem": "bdev", 00:05:51.280 "config": [ 00:05:51.280 { 00:05:51.280 "params": { 00:05:51.280 "trtype": "pcie", 00:05:51.280 "traddr": "0000:00:06.0", 00:05:51.280 "name": "Nvme0" 00:05:51.280 }, 00:05:51.280 "method": "bdev_nvme_attach_controller" 00:05:51.280 }, 00:05:51.280 { 00:05:51.280 "method": "bdev_wait_for_examine" 00:05:51.280 } 00:05:51.280 ] 00:05:51.280 } 00:05:51.280 ] 00:05:51.280 } 00:05:51.280 [2024-12-02 07:33:16.830976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.280 [2024-12-02 07:33:16.878202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.539  [2024-12-02T07:33:17.163Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:51.539 00:05:51.539 07:33:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.539 07:33:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:51.539 07:33:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.539 07:33:17 -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.539 07:33:17 -- dd/common.sh@12 -- # local size=49152 00:05:51.539 07:33:17 -- dd/common.sh@14 -- # local bs=1048576 00:05:51.539 07:33:17 -- dd/common.sh@15 -- # local count=1 00:05:51.539 07:33:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.539 07:33:17 -- dd/common.sh@18 -- # gen_conf 00:05:51.539 07:33:17 -- dd/common.sh@31 -- # xtrace_disable 00:05:51.539 07:33:17 -- common/autotest_common.sh@10 -- # set +x 00:05:51.799 [2024-12-02 
07:33:17.205370] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.799 [2024-12-02 07:33:17.205455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57901 ] 00:05:51.799 { 00:05:51.799 "subsystems": [ 00:05:51.799 { 00:05:51.799 "subsystem": "bdev", 00:05:51.799 "config": [ 00:05:51.799 { 00:05:51.799 "params": { 00:05:51.799 "trtype": "pcie", 00:05:51.799 "traddr": "0000:00:06.0", 00:05:51.799 "name": "Nvme0" 00:05:51.799 }, 00:05:51.799 "method": "bdev_nvme_attach_controller" 00:05:51.799 }, 00:05:51.799 { 00:05:51.799 "method": "bdev_wait_for_examine" 00:05:51.799 } 00:05:51.799 ] 00:05:51.799 } 00:05:51.799 ] 00:05:51.799 } 00:05:51.799 [2024-12-02 07:33:17.342667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.799 [2024-12-02 07:33:17.408399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.058  [2024-12-02T07:33:17.941Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:52.317 00:05:52.317 07:33:17 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:52.317 07:33:17 -- dd/basic_rw.sh@23 -- # count=3 00:05:52.317 07:33:17 -- dd/basic_rw.sh@24 -- # count=3 00:05:52.317 07:33:17 -- dd/basic_rw.sh@25 -- # size=49152 00:05:52.317 07:33:17 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:52.318 07:33:17 -- dd/common.sh@98 -- # xtrace_disable 00:05:52.318 07:33:17 -- common/autotest_common.sh@10 -- # set +x 00:05:52.583 07:33:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:52.583 07:33:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:52.583 07:33:18 -- dd/common.sh@31 -- # xtrace_disable 00:05:52.583 07:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:52.583 [2024-12-02 07:33:18.186351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
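The clear_nvme calls that close each pass issue the same zero-fill, visible in the dd/common.sh trace above: a single 1 MiB block copied from /dev/zero over the start of the bdev. A sketch of that helper as it behaves here (argument handling simplified; the empty second argument in the clear_nvme Nvme0n1 '' <size> call sites is kept only for signature compatibility; DD and gen_conf as in the first sketch above):

    clear_nvme() {
        local bdev=$1 nvme_ref=$2 size=$3   # e.g. clear_nvme Nvme0n1 '' 49152
        local bs=1048576 count=1            # one 1 MiB zero block covers every size used in this run
        "$DD" --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json <(gen_conf)
    }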
00:05:52.583 [2024-12-02 07:33:18.186450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57919 ] 00:05:52.583 { 00:05:52.583 "subsystems": [ 00:05:52.583 { 00:05:52.584 "subsystem": "bdev", 00:05:52.584 "config": [ 00:05:52.584 { 00:05:52.584 "params": { 00:05:52.584 "trtype": "pcie", 00:05:52.584 "traddr": "0000:00:06.0", 00:05:52.584 "name": "Nvme0" 00:05:52.584 }, 00:05:52.584 "method": "bdev_nvme_attach_controller" 00:05:52.584 }, 00:05:52.584 { 00:05:52.584 "method": "bdev_wait_for_examine" 00:05:52.584 } 00:05:52.584 ] 00:05:52.584 } 00:05:52.584 ] 00:05:52.584 } 00:05:52.858 [2024-12-02 07:33:18.322963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.858 [2024-12-02 07:33:18.373829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.152  [2024-12-02T07:33:18.776Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:53.152 00:05:53.152 07:33:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:53.152 07:33:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:53.152 07:33:18 -- dd/common.sh@31 -- # xtrace_disable 00:05:53.152 07:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:53.152 [2024-12-02 07:33:18.702880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.152 [2024-12-02 07:33:18.702967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57937 ] 00:05:53.152 { 00:05:53.152 "subsystems": [ 00:05:53.152 { 00:05:53.153 "subsystem": "bdev", 00:05:53.153 "config": [ 00:05:53.153 { 00:05:53.153 "params": { 00:05:53.153 "trtype": "pcie", 00:05:53.153 "traddr": "0000:00:06.0", 00:05:53.153 "name": "Nvme0" 00:05:53.153 }, 00:05:53.153 "method": "bdev_nvme_attach_controller" 00:05:53.153 }, 00:05:53.153 { 00:05:53.153 "method": "bdev_wait_for_examine" 00:05:53.153 } 00:05:53.153 ] 00:05:53.153 } 00:05:53.153 ] 00:05:53.153 } 00:05:53.412 [2024-12-02 07:33:18.838359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.412 [2024-12-02 07:33:18.886858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.412  [2024-12-02T07:33:19.295Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:53.671 00:05:53.671 07:33:19 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.671 07:33:19 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:53.671 07:33:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:53.671 07:33:19 -- dd/common.sh@11 -- # local nvme_ref= 00:05:53.671 07:33:19 -- dd/common.sh@12 -- # local size=49152 00:05:53.671 07:33:19 -- dd/common.sh@14 -- # local bs=1048576 00:05:53.671 07:33:19 -- dd/common.sh@15 -- # local count=1 00:05:53.671 07:33:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:53.671 07:33:19 -- dd/common.sh@18 -- # gen_conf 00:05:53.671 07:33:19 -- dd/common.sh@31 -- # xtrace_disable 00:05:53.671 07:33:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.671 [2024-12-02 
07:33:19.229507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.671 [2024-12-02 07:33:19.229632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 00:05:53.671 { 00:05:53.671 "subsystems": [ 00:05:53.671 { 00:05:53.671 "subsystem": "bdev", 00:05:53.671 "config": [ 00:05:53.671 { 00:05:53.671 "params": { 00:05:53.671 "trtype": "pcie", 00:05:53.671 "traddr": "0000:00:06.0", 00:05:53.671 "name": "Nvme0" 00:05:53.671 }, 00:05:53.671 "method": "bdev_nvme_attach_controller" 00:05:53.671 }, 00:05:53.671 { 00:05:53.671 "method": "bdev_wait_for_examine" 00:05:53.671 } 00:05:53.671 ] 00:05:53.671 } 00:05:53.671 ] 00:05:53.671 } 00:05:53.930 [2024-12-02 07:33:19.370745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.930 [2024-12-02 07:33:19.418250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.930  [2024-12-02T07:33:19.814Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:54.190 00:05:54.190 ************************************ 00:05:54.190 END TEST dd_rw 00:05:54.190 ************************************ 00:05:54.190 00:05:54.190 real 0m12.299s 00:05:54.190 user 0m9.188s 00:05:54.190 sys 0m2.049s 00:05:54.190 07:33:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.190 07:33:19 -- common/autotest_common.sh@10 -- # set +x 00:05:54.190 07:33:19 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:54.190 07:33:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.190 07:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.190 07:33:19 -- common/autotest_common.sh@10 -- # set +x 00:05:54.190 ************************************ 00:05:54.190 START TEST dd_rw_offset 00:05:54.190 ************************************ 00:05:54.190 07:33:19 -- common/autotest_common.sh@1114 -- # basic_offset 00:05:54.190 07:33:19 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:54.190 07:33:19 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:54.190 07:33:19 -- dd/common.sh@98 -- # xtrace_disable 00:05:54.190 07:33:19 -- common/autotest_common.sh@10 -- # set +x 00:05:54.190 07:33:19 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:54.450 07:33:19 -- dd/basic_rw.sh@56 -- # 
data=llgqpmvf3yrz31nve0lccc9ryasbktn1k1b7923mil2x14bvifujgfu3z19e8oofht0hcyem363u0lre3abb6fnnre8grjm0rqdiox4wm2htgbm3eswy1o7cqaqfk6vljnkexkm5s0gqrzx0yeta33cf19zkgcscz8uwrb5wzzryizmtaklpc7xn29ezt5xvf7pkzzd25xzlumpz8fd8w0ovk9qx8jfawyc9eav4dcvikpf0cta8r0tnuqr9s0yk57fgynvuwmy356orydr2bmq7o4uzs4ocv97c6ouj693pu15cn9kglffzyytmsqqvqec5hhlwcy0sbyixbv4dyllesn45x04whqae69gib79viusdjzi2trlb05vgj4cuhe31fvr013fs5f22asfvawjcphv6kivb8k89w7f9ox4tvs7ww0ylmfhck9x9161e4mxgeswxkgse09ex7pfkloixrcen3y54wpks0ic6lg7ggmwvuhqvbktdd0fbga9mq8a79xt3jez9zux2xi65wu691tauthpu142djheynyvahkhodiweef85lnqra0kki66l03f4bi9ws656wk6igjhn4b0xzv26sckzmha95bu7x2qjv9opvefb4yj3im24s7oa3mcq3pcsktjc7v9z8sd67l4qfrzkua2frqv6qrgsjepi46w308pq3ebeimc036lfq6hpvgsyl8sljnm4w3j9v3f2qk48ige0u39cvwcbvhwgj5fcq3c721j60ihyenn1xrikt989h3n3n2p8w8nuo35rs9mhty8hmi1z08lgs84bytgifdqpb1kav6zvd77q4h103rfcl4bgtkuxzhibhq4dituv2a4xj5wjz4maj6v4f3btx6k38tc94zevzt7waivnsmmf8gb0ee5xncw12m19xg8lrre5d9d017r0u9rvzfkwcqmh4g9e5mtihb4uuq3083gu6v4s6hobaz2njvgknla4ruqx4xe9jxgvryjxyjgbrqzurl6i5f0z3aytkckn76wa3a2lde1ni76yq5jkjpsl9zuuzaxe7o8hfaovjmuztl99pmbx4ocmbk94n8znijqdf5omkn5ngc5oed044eu8qicmaokmiri4b5z4246hlwpmzdmlhwfy0r9ynp96w49l51x226czz0eckiiv9te5kw83q6du1piadwoxs3l5cvg9w4py5g0gp734d8me6o2ec4iyc69svxwpsqjm8wrzpg099pqyvmy0bhtc0wl3joei8jpuwrbesfdhpmt1qn9zmrgiv0wfdkgc3nvs27eq8db98oa9ebpsz6skd0is5y11jthe6n8oqgbhoiyip0nq5kez5va7qivk71i3le2nfx82o3mtn6rmuk722wxil377eefq82govq83mi1y59uuk6x2bdb776kybpezzovqgo59rtmyvzkwhsadebglbravf4ktluq1skkfuvh9g9csf3vv6gpyy42tcz81571dq5u9xx8if30imkip7qt0wb0gyjlpsp8wjpqd976l0x4xggtrjbk365x5ad1mrhfdy09c2kj0phf36n0m60z4ihpsnj136c9m7sesc46s02p45si8sa9z7pd7j4gwdykig2vv68d9l4vj50hwud84avfjzych9h7u2vty8y9h7e47dr4yua2ql8g6hsk91huhrr00ze6jcedtixs3hl7bi875yid258ce4gayorppvwfni682ahufyvbpm74653p06ysl1bxdzlnmpevea66nx6y99ruelqmiiaznlyvhhbhfeywtuenaiagpeoawnt1aybrphgkwwspbfxy1p2rn5n1p6e9piiw4z3xyobx6vd4zhpasssa3cdipldm7irfjiflwiro37unbm9k655ywhaakvpk4k5ey7jpvirgnemkjitqhgmruiseoeolzzfa1bach6i5u53ayczd6rxpbwboygpo5qpc7vbi5tyu0r6ftto5ieyagipyjz1zl3bxkfz0qvf5kspbew308wf6tjrxj34n5qs1hjll86s2ccrjf220mi8npfvw01zcv8t4h68w6r5qlz26zwnc8t8g6xubuzx19jssvkwg2k87h3vzbfe0xpc949vzqckafhlter012x6vn4w13oyj1u27oft3bnt5g616uk60n2duq86qljw8fmcso0rj6d0ob98byqouobriezfexhbdf7a7gl6svg7t877a2yk9f4xbdmezll88jwyp97kfylbwee2ej31aii7gvyhxgq52uqxsb3sk0o4e7sec14lr42wxl19q00urlkkapce9ovrk8d3s86ax4uftbif3kvhynht75gqk3utrimvkvl7h7oc58vfilxaa19mi9mte6g8b5zv4257c0p5vvgf7f87ojgs71ks4nfddrmqutdwm7myzb0wxyxjskxxtipf7ue2w52zj1emb7pj037yjaefqj8yap2g1swvojglnr1a5el5vpi0mn7dimg86sudoesp2wvx1zsmofw6v1tijoy7bflz6l0zuqctxeevje0cdcz84301eeejhxkovxxr27g9mk2z9tbojsimr042oxdivr265p54h6lorfaqamcppzrzay2tt4v310o4fch9k85lq11102tqgrhgyymbskq46agsvrnqrl20xy2qg1g0k3vjrg7ndy6mbs4yg85n8qob591xr2j8j13ku9iq51iuzsz3pkss0n70sejg0zp15tccof1h59d7870sc6e72sqxuzq5b4gua2dcdavxho41hr3lfbkzkirkgxg51ao35rz6xn5b1ecrg5bt8oe4w7j1lqgzaat581snk64rrskgddtroz6n7m7hav7va48ct3vq5czvxfz6lew0qb1mgqtgkkb21nnb2091va9bqwru6k6kl5qnnigq43qs5gp3nkwoh4q1o3kxtxpeegr917mebvnnukw41x9kuj0wkt0rd9h9jj4ijif3b0jd9k45u3gzx4bp1h3skpgl3qx0035btgj4ecvtut83les4qes2f5y27uwztxeylw3dz812d7a06qyzqijgtds4j8ffup75vq9r4sul7ph75x8tclygd9u9q3jb4tdx3w76cvl7px9cerq56cqierhmyevv6yjgt5bfr383t69q8bwte4yx6e80f07hglpy5pvlu48btvxrr5llek2gi0hw5fbsey57hwjqyfxnwqv891mgltkyuwe4n0v8fks4x38sdhci6w9wgdjpbbwuq09u5gurvj6vpfemtrucr120m47y8up6bonz5qvo8ed2rf40x3alwfqtnukfl4bmtbvdwjq5tdmjz9u1sqhiwj6df7v19oeffnw6l7upej1ueem5gove59cy4rl0jfg5vujds6b5mocucibc6zao79fmxnqng8q83m2cb0u1lka46dg8avj2drabe6tkiq0jccigop209rnmg44d7m28r5gxunwls97orzyxjeylw79yx4tncobp2d1htzk3zipum8y0woq11mg5793h
y7f2q175db4e0wn1i3l0y5qjexzv05mpxzkcgbsurdj1qfx26kf0xx9bru7kxf997vewbkhv2sogcbti3ql2k1w5y6wcc24y507gbn990loasdgnjnuq566gwtm95ihluizzqyi506no3qavhmow8lhc45bjzw4n7s7jrwr418qjcvah6zit8wsj8rp5ydnnt5t7s1tv4dw9iywk5ggv0wax8f5f2ilx7cdopaus2e8fxbv1qbbxi6zns3p832val7df9plnolkoq46pg3gba1gzvgk4cety7rra9d1dmokq1tjqy569vygmb1uvukxdgde56mjx1422ywd88hyqcv0bvxk7epwzg4i26veyvidc10nj7v96a5kcv1n2qcpaugyoezboca0luvmf3j8l7c1v5xy68wwonplkadyx88go26mp5utzuyerrh3xr5ot2owauybiobcc8zcxbstuto5ppg4szltyh3w683bnmj28x2s97229iryy5svavf52lumy6ah78apscdode2u088mzw55a6nywye 00:05:54.450 07:33:19 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:54.450 07:33:19 -- dd/basic_rw.sh@59 -- # gen_conf 00:05:54.450 07:33:19 -- dd/common.sh@31 -- # xtrace_disable 00:05:54.450 07:33:19 -- common/autotest_common.sh@10 -- # set +x 00:05:54.450 [2024-12-02 07:33:19.863736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.450 [2024-12-02 07:33:19.863981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57980 ] 00:05:54.450 { 00:05:54.450 "subsystems": [ 00:05:54.450 { 00:05:54.450 "subsystem": "bdev", 00:05:54.450 "config": [ 00:05:54.450 { 00:05:54.450 "params": { 00:05:54.450 "trtype": "pcie", 00:05:54.450 "traddr": "0000:00:06.0", 00:05:54.450 "name": "Nvme0" 00:05:54.450 }, 00:05:54.450 "method": "bdev_nvme_attach_controller" 00:05:54.450 }, 00:05:54.450 { 00:05:54.450 "method": "bdev_wait_for_examine" 00:05:54.450 } 00:05:54.450 ] 00:05:54.450 } 00:05:54.450 ] 00:05:54.450 } 00:05:54.450 [2024-12-02 07:33:19.999666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.450 [2024-12-02 07:33:20.053333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.709  [2024-12-02T07:33:20.333Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:54.709 00:05:54.709 07:33:20 -- dd/basic_rw.sh@65 -- # gen_conf 00:05:54.709 07:33:20 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:54.709 07:33:20 -- dd/common.sh@31 -- # xtrace_disable 00:05:54.709 07:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.969 [2024-12-02 07:33:20.393684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
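dd_rw_offset, running at this point in the trace, exercises --seek and --skip rather than throughput: the 4096-byte random string captured above is written one block into the bdev and must come back intact when read from the same offset; the string comparison that finishes the test appears further below. A minimal sketch of the sequence (how the payload reaches dd.dump0 and how dd.dump1 is fed to read are not visible in this excerpt, so those redirections are assumptions; DD, D0, D1 and gen_conf as in the earlier sketches):

    data=$(gen_bytes 4096)                                               # the long random string in the trace
    printf '%s' "$data" > "$D0"                                          # assumed redirection
    "$DD" --if="$D0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)            # write 4 KiB at block offset 1
    "$DD" --ib=Nvme0n1 --of="$D1" --skip=1 --count=1 --json <(gen_conf)  # read that one block back
    read -rn4096 data_check < "$D1"                                      # compare exactly 4096 bytes
    [[ $data_check == "$data" ]]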
00:05:54.969 [2024-12-02 07:33:20.393807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57987 ] 00:05:54.969 { 00:05:54.969 "subsystems": [ 00:05:54.969 { 00:05:54.969 "subsystem": "bdev", 00:05:54.969 "config": [ 00:05:54.969 { 00:05:54.969 "params": { 00:05:54.969 "trtype": "pcie", 00:05:54.969 "traddr": "0000:00:06.0", 00:05:54.969 "name": "Nvme0" 00:05:54.969 }, 00:05:54.969 "method": "bdev_nvme_attach_controller" 00:05:54.969 }, 00:05:54.969 { 00:05:54.969 "method": "bdev_wait_for_examine" 00:05:54.969 } 00:05:54.969 ] 00:05:54.969 } 00:05:54.969 ] 00:05:54.969 } 00:05:54.969 [2024-12-02 07:33:20.535091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.228 [2024-12-02 07:33:20.598024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.228  [2024-12-02T07:33:21.112Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:55.488 00:05:55.488 07:33:20 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:55.488 07:33:20 -- dd/basic_rw.sh@72 -- # [[ llgqpmvf3yrz31nve0lccc9ryasbktn1k1b7923mil2x14bvifujgfu3z19e8oofht0hcyem363u0lre3abb6fnnre8grjm0rqdiox4wm2htgbm3eswy1o7cqaqfk6vljnkexkm5s0gqrzx0yeta33cf19zkgcscz8uwrb5wzzryizmtaklpc7xn29ezt5xvf7pkzzd25xzlumpz8fd8w0ovk9qx8jfawyc9eav4dcvikpf0cta8r0tnuqr9s0yk57fgynvuwmy356orydr2bmq7o4uzs4ocv97c6ouj693pu15cn9kglffzyytmsqqvqec5hhlwcy0sbyixbv4dyllesn45x04whqae69gib79viusdjzi2trlb05vgj4cuhe31fvr013fs5f22asfvawjcphv6kivb8k89w7f9ox4tvs7ww0ylmfhck9x9161e4mxgeswxkgse09ex7pfkloixrcen3y54wpks0ic6lg7ggmwvuhqvbktdd0fbga9mq8a79xt3jez9zux2xi65wu691tauthpu142djheynyvahkhodiweef85lnqra0kki66l03f4bi9ws656wk6igjhn4b0xzv26sckzmha95bu7x2qjv9opvefb4yj3im24s7oa3mcq3pcsktjc7v9z8sd67l4qfrzkua2frqv6qrgsjepi46w308pq3ebeimc036lfq6hpvgsyl8sljnm4w3j9v3f2qk48ige0u39cvwcbvhwgj5fcq3c721j60ihyenn1xrikt989h3n3n2p8w8nuo35rs9mhty8hmi1z08lgs84bytgifdqpb1kav6zvd77q4h103rfcl4bgtkuxzhibhq4dituv2a4xj5wjz4maj6v4f3btx6k38tc94zevzt7waivnsmmf8gb0ee5xncw12m19xg8lrre5d9d017r0u9rvzfkwcqmh4g9e5mtihb4uuq3083gu6v4s6hobaz2njvgknla4ruqx4xe9jxgvryjxyjgbrqzurl6i5f0z3aytkckn76wa3a2lde1ni76yq5jkjpsl9zuuzaxe7o8hfaovjmuztl99pmbx4ocmbk94n8znijqdf5omkn5ngc5oed044eu8qicmaokmiri4b5z4246hlwpmzdmlhwfy0r9ynp96w49l51x226czz0eckiiv9te5kw83q6du1piadwoxs3l5cvg9w4py5g0gp734d8me6o2ec4iyc69svxwpsqjm8wrzpg099pqyvmy0bhtc0wl3joei8jpuwrbesfdhpmt1qn9zmrgiv0wfdkgc3nvs27eq8db98oa9ebpsz6skd0is5y11jthe6n8oqgbhoiyip0nq5kez5va7qivk71i3le2nfx82o3mtn6rmuk722wxil377eefq82govq83mi1y59uuk6x2bdb776kybpezzovqgo59rtmyvzkwhsadebglbravf4ktluq1skkfuvh9g9csf3vv6gpyy42tcz81571dq5u9xx8if30imkip7qt0wb0gyjlpsp8wjpqd976l0x4xggtrjbk365x5ad1mrhfdy09c2kj0phf36n0m60z4ihpsnj136c9m7sesc46s02p45si8sa9z7pd7j4gwdykig2vv68d9l4vj50hwud84avfjzych9h7u2vty8y9h7e47dr4yua2ql8g6hsk91huhrr00ze6jcedtixs3hl7bi875yid258ce4gayorppvwfni682ahufyvbpm74653p06ysl1bxdzlnmpevea66nx6y99ruelqmiiaznlyvhhbhfeywtuenaiagpeoawnt1aybrphgkwwspbfxy1p2rn5n1p6e9piiw4z3xyobx6vd4zhpasssa3cdipldm7irfjiflwiro37unbm9k655ywhaakvpk4k5ey7jpvirgnemkjitqhgmruiseoeolzzfa1bach6i5u53ayczd6rxpbwboygpo5qpc7vbi5tyu0r6ftto5ieyagipyjz1zl3bxkfz0qvf5kspbew308wf6tjrxj34n5qs1hjll86s2ccrjf220mi8npfvw01zcv8t4h68w6r5qlz26zwnc8t8g6xubuzx19jssvkwg2k87h3vzbfe0xpc949vzqckafhlter012x6vn4w13oyj1u27oft3bnt5g616uk60n2duq86qljw8fmcso0rj6d0ob98byqouobriezfexhbdf7a7gl6svg7t877a2yk9f4xbdmezll88jwyp97kfylbwee2ej31aii7gvyhxgq52uqxsb3sk0o4e7sec14lr42wxl19q00urlkkapce9ovrk8d3s86ax4uftbif3kvhynht75gqk
3utrimvkvl7h7oc58vfilxaa19mi9mte6g8b5zv4257c0p5vvgf7f87ojgs71ks4nfddrmqutdwm7myzb0wxyxjskxxtipf7ue2w52zj1emb7pj037yjaefqj8yap2g1swvojglnr1a5el5vpi0mn7dimg86sudoesp2wvx1zsmofw6v1tijoy7bflz6l0zuqctxeevje0cdcz84301eeejhxkovxxr27g9mk2z9tbojsimr042oxdivr265p54h6lorfaqamcppzrzay2tt4v310o4fch9k85lq11102tqgrhgyymbskq46agsvrnqrl20xy2qg1g0k3vjrg7ndy6mbs4yg85n8qob591xr2j8j13ku9iq51iuzsz3pkss0n70sejg0zp15tccof1h59d7870sc6e72sqxuzq5b4gua2dcdavxho41hr3lfbkzkirkgxg51ao35rz6xn5b1ecrg5bt8oe4w7j1lqgzaat581snk64rrskgddtroz6n7m7hav7va48ct3vq5czvxfz6lew0qb1mgqtgkkb21nnb2091va9bqwru6k6kl5qnnigq43qs5gp3nkwoh4q1o3kxtxpeegr917mebvnnukw41x9kuj0wkt0rd9h9jj4ijif3b0jd9k45u3gzx4bp1h3skpgl3qx0035btgj4ecvtut83les4qes2f5y27uwztxeylw3dz812d7a06qyzqijgtds4j8ffup75vq9r4sul7ph75x8tclygd9u9q3jb4tdx3w76cvl7px9cerq56cqierhmyevv6yjgt5bfr383t69q8bwte4yx6e80f07hglpy5pvlu48btvxrr5llek2gi0hw5fbsey57hwjqyfxnwqv891mgltkyuwe4n0v8fks4x38sdhci6w9wgdjpbbwuq09u5gurvj6vpfemtrucr120m47y8up6bonz5qvo8ed2rf40x3alwfqtnukfl4bmtbvdwjq5tdmjz9u1sqhiwj6df7v19oeffnw6l7upej1ueem5gove59cy4rl0jfg5vujds6b5mocucibc6zao79fmxnqng8q83m2cb0u1lka46dg8avj2drabe6tkiq0jccigop209rnmg44d7m28r5gxunwls97orzyxjeylw79yx4tncobp2d1htzk3zipum8y0woq11mg5793hy7f2q175db4e0wn1i3l0y5qjexzv05mpxzkcgbsurdj1qfx26kf0xx9bru7kxf997vewbkhv2sogcbti3ql2k1w5y6wcc24y507gbn990loasdgnjnuq566gwtm95ihluizzqyi506no3qavhmow8lhc45bjzw4n7s7jrwr418qjcvah6zit8wsj8rp5ydnnt5t7s1tv4dw9iywk5ggv0wax8f5f2ilx7cdopaus2e8fxbv1qbbxi6zns3p832val7df9plnolkoq46pg3gba1gzvgk4cety7rra9d1dmokq1tjqy569vygmb1uvukxdgde56mjx1422ywd88hyqcv0bvxk7epwzg4i26veyvidc10nj7v96a5kcv1n2qcpaugyoezboca0luvmf3j8l7c1v5xy68wwonplkadyx88go26mp5utzuyerrh3xr5ot2owauybiobcc8zcxbstuto5ppg4szltyh3w683bnmj28x2s97229iryy5svavf52lumy6ah78apscdode2u088mzw55a6nywye == \l\l\g\q\p\m\v\f\3\y\r\z\3\1\n\v\e\0\l\c\c\c\9\r\y\a\s\b\k\t\n\1\k\1\b\7\9\2\3\m\i\l\2\x\1\4\b\v\i\f\u\j\g\f\u\3\z\1\9\e\8\o\o\f\h\t\0\h\c\y\e\m\3\6\3\u\0\l\r\e\3\a\b\b\6\f\n\n\r\e\8\g\r\j\m\0\r\q\d\i\o\x\4\w\m\2\h\t\g\b\m\3\e\s\w\y\1\o\7\c\q\a\q\f\k\6\v\l\j\n\k\e\x\k\m\5\s\0\g\q\r\z\x\0\y\e\t\a\3\3\c\f\1\9\z\k\g\c\s\c\z\8\u\w\r\b\5\w\z\z\r\y\i\z\m\t\a\k\l\p\c\7\x\n\2\9\e\z\t\5\x\v\f\7\p\k\z\z\d\2\5\x\z\l\u\m\p\z\8\f\d\8\w\0\o\v\k\9\q\x\8\j\f\a\w\y\c\9\e\a\v\4\d\c\v\i\k\p\f\0\c\t\a\8\r\0\t\n\u\q\r\9\s\0\y\k\5\7\f\g\y\n\v\u\w\m\y\3\5\6\o\r\y\d\r\2\b\m\q\7\o\4\u\z\s\4\o\c\v\9\7\c\6\o\u\j\6\9\3\p\u\1\5\c\n\9\k\g\l\f\f\z\y\y\t\m\s\q\q\v\q\e\c\5\h\h\l\w\c\y\0\s\b\y\i\x\b\v\4\d\y\l\l\e\s\n\4\5\x\0\4\w\h\q\a\e\6\9\g\i\b\7\9\v\i\u\s\d\j\z\i\2\t\r\l\b\0\5\v\g\j\4\c\u\h\e\3\1\f\v\r\0\1\3\f\s\5\f\2\2\a\s\f\v\a\w\j\c\p\h\v\6\k\i\v\b\8\k\8\9\w\7\f\9\o\x\4\t\v\s\7\w\w\0\y\l\m\f\h\c\k\9\x\9\1\6\1\e\4\m\x\g\e\s\w\x\k\g\s\e\0\9\e\x\7\p\f\k\l\o\i\x\r\c\e\n\3\y\5\4\w\p\k\s\0\i\c\6\l\g\7\g\g\m\w\v\u\h\q\v\b\k\t\d\d\0\f\b\g\a\9\m\q\8\a\7\9\x\t\3\j\e\z\9\z\u\x\2\x\i\6\5\w\u\6\9\1\t\a\u\t\h\p\u\1\4\2\d\j\h\e\y\n\y\v\a\h\k\h\o\d\i\w\e\e\f\8\5\l\n\q\r\a\0\k\k\i\6\6\l\0\3\f\4\b\i\9\w\s\6\5\6\w\k\6\i\g\j\h\n\4\b\0\x\z\v\2\6\s\c\k\z\m\h\a\9\5\b\u\7\x\2\q\j\v\9\o\p\v\e\f\b\4\y\j\3\i\m\2\4\s\7\o\a\3\m\c\q\3\p\c\s\k\t\j\c\7\v\9\z\8\s\d\6\7\l\4\q\f\r\z\k\u\a\2\f\r\q\v\6\q\r\g\s\j\e\p\i\4\6\w\3\0\8\p\q\3\e\b\e\i\m\c\0\3\6\l\f\q\6\h\p\v\g\s\y\l\8\s\l\j\n\m\4\w\3\j\9\v\3\f\2\q\k\4\8\i\g\e\0\u\3\9\c\v\w\c\b\v\h\w\g\j\5\f\c\q\3\c\7\2\1\j\6\0\i\h\y\e\n\n\1\x\r\i\k\t\9\8\9\h\3\n\3\n\2\p\8\w\8\n\u\o\3\5\r\s\9\m\h\t\y\8\h\m\i\1\z\0\8\l\g\s\8\4\b\y\t\g\i\f\d\q\p\b\1\k\a\v\6\z\v\d\7\7\q\4\h\1\0\3\r\f\c\l\4\b\g\t\k\u\x\z\h\i\b\h\q\4\d\i\t\u\v\2\a\4\x\j\5\w\j\z\4\m\a\j\6\v\4\f\3\b\t\x\6\k\3\8\t\c\9\4\z\e\v\z\t\7\w\a\i\v\n\s\m\m
\f\8\g\b\0\e\e\5\x\n\c\w\1\2\m\1\9\x\g\8\l\r\r\e\5\d\9\d\0\1\7\r\0\u\9\r\v\z\f\k\w\c\q\m\h\4\g\9\e\5\m\t\i\h\b\4\u\u\q\3\0\8\3\g\u\6\v\4\s\6\h\o\b\a\z\2\n\j\v\g\k\n\l\a\4\r\u\q\x\4\x\e\9\j\x\g\v\r\y\j\x\y\j\g\b\r\q\z\u\r\l\6\i\5\f\0\z\3\a\y\t\k\c\k\n\7\6\w\a\3\a\2\l\d\e\1\n\i\7\6\y\q\5\j\k\j\p\s\l\9\z\u\u\z\a\x\e\7\o\8\h\f\a\o\v\j\m\u\z\t\l\9\9\p\m\b\x\4\o\c\m\b\k\9\4\n\8\z\n\i\j\q\d\f\5\o\m\k\n\5\n\g\c\5\o\e\d\0\4\4\e\u\8\q\i\c\m\a\o\k\m\i\r\i\4\b\5\z\4\2\4\6\h\l\w\p\m\z\d\m\l\h\w\f\y\0\r\9\y\n\p\9\6\w\4\9\l\5\1\x\2\2\6\c\z\z\0\e\c\k\i\i\v\9\t\e\5\k\w\8\3\q\6\d\u\1\p\i\a\d\w\o\x\s\3\l\5\c\v\g\9\w\4\p\y\5\g\0\g\p\7\3\4\d\8\m\e\6\o\2\e\c\4\i\y\c\6\9\s\v\x\w\p\s\q\j\m\8\w\r\z\p\g\0\9\9\p\q\y\v\m\y\0\b\h\t\c\0\w\l\3\j\o\e\i\8\j\p\u\w\r\b\e\s\f\d\h\p\m\t\1\q\n\9\z\m\r\g\i\v\0\w\f\d\k\g\c\3\n\v\s\2\7\e\q\8\d\b\9\8\o\a\9\e\b\p\s\z\6\s\k\d\0\i\s\5\y\1\1\j\t\h\e\6\n\8\o\q\g\b\h\o\i\y\i\p\0\n\q\5\k\e\z\5\v\a\7\q\i\v\k\7\1\i\3\l\e\2\n\f\x\8\2\o\3\m\t\n\6\r\m\u\k\7\2\2\w\x\i\l\3\7\7\e\e\f\q\8\2\g\o\v\q\8\3\m\i\1\y\5\9\u\u\k\6\x\2\b\d\b\7\7\6\k\y\b\p\e\z\z\o\v\q\g\o\5\9\r\t\m\y\v\z\k\w\h\s\a\d\e\b\g\l\b\r\a\v\f\4\k\t\l\u\q\1\s\k\k\f\u\v\h\9\g\9\c\s\f\3\v\v\6\g\p\y\y\4\2\t\c\z\8\1\5\7\1\d\q\5\u\9\x\x\8\i\f\3\0\i\m\k\i\p\7\q\t\0\w\b\0\g\y\j\l\p\s\p\8\w\j\p\q\d\9\7\6\l\0\x\4\x\g\g\t\r\j\b\k\3\6\5\x\5\a\d\1\m\r\h\f\d\y\0\9\c\2\k\j\0\p\h\f\3\6\n\0\m\6\0\z\4\i\h\p\s\n\j\1\3\6\c\9\m\7\s\e\s\c\4\6\s\0\2\p\4\5\s\i\8\s\a\9\z\7\p\d\7\j\4\g\w\d\y\k\i\g\2\v\v\6\8\d\9\l\4\v\j\5\0\h\w\u\d\8\4\a\v\f\j\z\y\c\h\9\h\7\u\2\v\t\y\8\y\9\h\7\e\4\7\d\r\4\y\u\a\2\q\l\8\g\6\h\s\k\9\1\h\u\h\r\r\0\0\z\e\6\j\c\e\d\t\i\x\s\3\h\l\7\b\i\8\7\5\y\i\d\2\5\8\c\e\4\g\a\y\o\r\p\p\v\w\f\n\i\6\8\2\a\h\u\f\y\v\b\p\m\7\4\6\5\3\p\0\6\y\s\l\1\b\x\d\z\l\n\m\p\e\v\e\a\6\6\n\x\6\y\9\9\r\u\e\l\q\m\i\i\a\z\n\l\y\v\h\h\b\h\f\e\y\w\t\u\e\n\a\i\a\g\p\e\o\a\w\n\t\1\a\y\b\r\p\h\g\k\w\w\s\p\b\f\x\y\1\p\2\r\n\5\n\1\p\6\e\9\p\i\i\w\4\z\3\x\y\o\b\x\6\v\d\4\z\h\p\a\s\s\s\a\3\c\d\i\p\l\d\m\7\i\r\f\j\i\f\l\w\i\r\o\3\7\u\n\b\m\9\k\6\5\5\y\w\h\a\a\k\v\p\k\4\k\5\e\y\7\j\p\v\i\r\g\n\e\m\k\j\i\t\q\h\g\m\r\u\i\s\e\o\e\o\l\z\z\f\a\1\b\a\c\h\6\i\5\u\5\3\a\y\c\z\d\6\r\x\p\b\w\b\o\y\g\p\o\5\q\p\c\7\v\b\i\5\t\y\u\0\r\6\f\t\t\o\5\i\e\y\a\g\i\p\y\j\z\1\z\l\3\b\x\k\f\z\0\q\v\f\5\k\s\p\b\e\w\3\0\8\w\f\6\t\j\r\x\j\3\4\n\5\q\s\1\h\j\l\l\8\6\s\2\c\c\r\j\f\2\2\0\m\i\8\n\p\f\v\w\0\1\z\c\v\8\t\4\h\6\8\w\6\r\5\q\l\z\2\6\z\w\n\c\8\t\8\g\6\x\u\b\u\z\x\1\9\j\s\s\v\k\w\g\2\k\8\7\h\3\v\z\b\f\e\0\x\p\c\9\4\9\v\z\q\c\k\a\f\h\l\t\e\r\0\1\2\x\6\v\n\4\w\1\3\o\y\j\1\u\2\7\o\f\t\3\b\n\t\5\g\6\1\6\u\k\6\0\n\2\d\u\q\8\6\q\l\j\w\8\f\m\c\s\o\0\r\j\6\d\0\o\b\9\8\b\y\q\o\u\o\b\r\i\e\z\f\e\x\h\b\d\f\7\a\7\g\l\6\s\v\g\7\t\8\7\7\a\2\y\k\9\f\4\x\b\d\m\e\z\l\l\8\8\j\w\y\p\9\7\k\f\y\l\b\w\e\e\2\e\j\3\1\a\i\i\7\g\v\y\h\x\g\q\5\2\u\q\x\s\b\3\s\k\0\o\4\e\7\s\e\c\1\4\l\r\4\2\w\x\l\1\9\q\0\0\u\r\l\k\k\a\p\c\e\9\o\v\r\k\8\d\3\s\8\6\a\x\4\u\f\t\b\i\f\3\k\v\h\y\n\h\t\7\5\g\q\k\3\u\t\r\i\m\v\k\v\l\7\h\7\o\c\5\8\v\f\i\l\x\a\a\1\9\m\i\9\m\t\e\6\g\8\b\5\z\v\4\2\5\7\c\0\p\5\v\v\g\f\7\f\8\7\o\j\g\s\7\1\k\s\4\n\f\d\d\r\m\q\u\t\d\w\m\7\m\y\z\b\0\w\x\y\x\j\s\k\x\x\t\i\p\f\7\u\e\2\w\5\2\z\j\1\e\m\b\7\p\j\0\3\7\y\j\a\e\f\q\j\8\y\a\p\2\g\1\s\w\v\o\j\g\l\n\r\1\a\5\e\l\5\v\p\i\0\m\n\7\d\i\m\g\8\6\s\u\d\o\e\s\p\2\w\v\x\1\z\s\m\o\f\w\6\v\1\t\i\j\o\y\7\b\f\l\z\6\l\0\z\u\q\c\t\x\e\e\v\j\e\0\c\d\c\z\8\4\3\0\1\e\e\e\j\h\x\k\o\v\x\x\r\2\7\g\9\m\k\2\z\9\t\b\o\j\s\i\m\r\0\4\2\o\x\d\i\v\r\2\6\5\p\5\4\h\6\l\o\r\f\a\q\a\m\c\p\p\z\r\z\a\y\2\t\t\4\v\3\1\0\o\4\f\c\h\9\k\8\5\l\q\1\1\1\0\2\t\q\g\r\h\g\y\y\m\b\s\k\q\4\6\a\g\s\v\r\n\q\r\l\2\0\x\y\
2\q\g\1\g\0\k\3\v\j\r\g\7\n\d\y\6\m\b\s\4\y\g\8\5\n\8\q\o\b\5\9\1\x\r\2\j\8\j\1\3\k\u\9\i\q\5\1\i\u\z\s\z\3\p\k\s\s\0\n\7\0\s\e\j\g\0\z\p\1\5\t\c\c\o\f\1\h\5\9\d\7\8\7\0\s\c\6\e\7\2\s\q\x\u\z\q\5\b\4\g\u\a\2\d\c\d\a\v\x\h\o\4\1\h\r\3\l\f\b\k\z\k\i\r\k\g\x\g\5\1\a\o\3\5\r\z\6\x\n\5\b\1\e\c\r\g\5\b\t\8\o\e\4\w\7\j\1\l\q\g\z\a\a\t\5\8\1\s\n\k\6\4\r\r\s\k\g\d\d\t\r\o\z\6\n\7\m\7\h\a\v\7\v\a\4\8\c\t\3\v\q\5\c\z\v\x\f\z\6\l\e\w\0\q\b\1\m\g\q\t\g\k\k\b\2\1\n\n\b\2\0\9\1\v\a\9\b\q\w\r\u\6\k\6\k\l\5\q\n\n\i\g\q\4\3\q\s\5\g\p\3\n\k\w\o\h\4\q\1\o\3\k\x\t\x\p\e\e\g\r\9\1\7\m\e\b\v\n\n\u\k\w\4\1\x\9\k\u\j\0\w\k\t\0\r\d\9\h\9\j\j\4\i\j\i\f\3\b\0\j\d\9\k\4\5\u\3\g\z\x\4\b\p\1\h\3\s\k\p\g\l\3\q\x\0\0\3\5\b\t\g\j\4\e\c\v\t\u\t\8\3\l\e\s\4\q\e\s\2\f\5\y\2\7\u\w\z\t\x\e\y\l\w\3\d\z\8\1\2\d\7\a\0\6\q\y\z\q\i\j\g\t\d\s\4\j\8\f\f\u\p\7\5\v\q\9\r\4\s\u\l\7\p\h\7\5\x\8\t\c\l\y\g\d\9\u\9\q\3\j\b\4\t\d\x\3\w\7\6\c\v\l\7\p\x\9\c\e\r\q\5\6\c\q\i\e\r\h\m\y\e\v\v\6\y\j\g\t\5\b\f\r\3\8\3\t\6\9\q\8\b\w\t\e\4\y\x\6\e\8\0\f\0\7\h\g\l\p\y\5\p\v\l\u\4\8\b\t\v\x\r\r\5\l\l\e\k\2\g\i\0\h\w\5\f\b\s\e\y\5\7\h\w\j\q\y\f\x\n\w\q\v\8\9\1\m\g\l\t\k\y\u\w\e\4\n\0\v\8\f\k\s\4\x\3\8\s\d\h\c\i\6\w\9\w\g\d\j\p\b\b\w\u\q\0\9\u\5\g\u\r\v\j\6\v\p\f\e\m\t\r\u\c\r\1\2\0\m\4\7\y\8\u\p\6\b\o\n\z\5\q\v\o\8\e\d\2\r\f\4\0\x\3\a\l\w\f\q\t\n\u\k\f\l\4\b\m\t\b\v\d\w\j\q\5\t\d\m\j\z\9\u\1\s\q\h\i\w\j\6\d\f\7\v\1\9\o\e\f\f\n\w\6\l\7\u\p\e\j\1\u\e\e\m\5\g\o\v\e\5\9\c\y\4\r\l\0\j\f\g\5\v\u\j\d\s\6\b\5\m\o\c\u\c\i\b\c\6\z\a\o\7\9\f\m\x\n\q\n\g\8\q\8\3\m\2\c\b\0\u\1\l\k\a\4\6\d\g\8\a\v\j\2\d\r\a\b\e\6\t\k\i\q\0\j\c\c\i\g\o\p\2\0\9\r\n\m\g\4\4\d\7\m\2\8\r\5\g\x\u\n\w\l\s\9\7\o\r\z\y\x\j\e\y\l\w\7\9\y\x\4\t\n\c\o\b\p\2\d\1\h\t\z\k\3\z\i\p\u\m\8\y\0\w\o\q\1\1\m\g\5\7\9\3\h\y\7\f\2\q\1\7\5\d\b\4\e\0\w\n\1\i\3\l\0\y\5\q\j\e\x\z\v\0\5\m\p\x\z\k\c\g\b\s\u\r\d\j\1\q\f\x\2\6\k\f\0\x\x\9\b\r\u\7\k\x\f\9\9\7\v\e\w\b\k\h\v\2\s\o\g\c\b\t\i\3\q\l\2\k\1\w\5\y\6\w\c\c\2\4\y\5\0\7\g\b\n\9\9\0\l\o\a\s\d\g\n\j\n\u\q\5\6\6\g\w\t\m\9\5\i\h\l\u\i\z\z\q\y\i\5\0\6\n\o\3\q\a\v\h\m\o\w\8\l\h\c\4\5\b\j\z\w\4\n\7\s\7\j\r\w\r\4\1\8\q\j\c\v\a\h\6\z\i\t\8\w\s\j\8\r\p\5\y\d\n\n\t\5\t\7\s\1\t\v\4\d\w\9\i\y\w\k\5\g\g\v\0\w\a\x\8\f\5\f\2\i\l\x\7\c\d\o\p\a\u\s\2\e\8\f\x\b\v\1\q\b\b\x\i\6\z\n\s\3\p\8\3\2\v\a\l\7\d\f\9\p\l\n\o\l\k\o\q\4\6\p\g\3\g\b\a\1\g\z\v\g\k\4\c\e\t\y\7\r\r\a\9\d\1\d\m\o\k\q\1\t\j\q\y\5\6\9\v\y\g\m\b\1\u\v\u\k\x\d\g\d\e\5\6\m\j\x\1\4\2\2\y\w\d\8\8\h\y\q\c\v\0\b\v\x\k\7\e\p\w\z\g\4\i\2\6\v\e\y\v\i\d\c\1\0\n\j\7\v\9\6\a\5\k\c\v\1\n\2\q\c\p\a\u\g\y\o\e\z\b\o\c\a\0\l\u\v\m\f\3\j\8\l\7\c\1\v\5\x\y\6\8\w\w\o\n\p\l\k\a\d\y\x\8\8\g\o\2\6\m\p\5\u\t\z\u\y\e\r\r\h\3\x\r\5\o\t\2\o\w\a\u\y\b\i\o\b\c\c\8\z\c\x\b\s\t\u\t\o\5\p\p\g\4\s\z\l\t\y\h\3\w\6\8\3\b\n\m\j\2\8\x\2\s\9\7\2\2\9\i\r\y\y\5\s\v\a\v\f\5\2\l\u\m\y\6\a\h\7\8\a\p\s\c\d\o\d\e\2\u\0\8\8\m\z\w\5\5\a\6\n\y\w\y\e ]] 00:05:55.488 00:05:55.488 real 0m1.113s 00:05:55.488 user 0m0.796s 00:05:55.488 sys 0m0.221s 00:05:55.488 ************************************ 00:05:55.488 END TEST dd_rw_offset 00:05:55.488 ************************************ 00:05:55.488 07:33:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.488 07:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.488 07:33:20 -- dd/basic_rw.sh@1 -- # cleanup 00:05:55.488 07:33:20 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:55.488 07:33:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.489 07:33:20 -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.489 07:33:20 -- dd/common.sh@12 -- # local size=0xffff 00:05:55.489 07:33:20 -- dd/common.sh@14 -- 
# local bs=1048576 00:05:55.489 07:33:20 -- dd/common.sh@15 -- # local count=1 00:05:55.489 07:33:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:55.489 07:33:20 -- dd/common.sh@18 -- # gen_conf 00:05:55.489 07:33:20 -- dd/common.sh@31 -- # xtrace_disable 00:05:55.489 07:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.489 [2024-12-02 07:33:20.968624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.489 [2024-12-02 07:33:20.968717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58020 ] 00:05:55.489 { 00:05:55.489 "subsystems": [ 00:05:55.489 { 00:05:55.489 "subsystem": "bdev", 00:05:55.489 "config": [ 00:05:55.489 { 00:05:55.489 "params": { 00:05:55.489 "trtype": "pcie", 00:05:55.489 "traddr": "0000:00:06.0", 00:05:55.489 "name": "Nvme0" 00:05:55.489 }, 00:05:55.489 "method": "bdev_nvme_attach_controller" 00:05:55.489 }, 00:05:55.489 { 00:05:55.489 "method": "bdev_wait_for_examine" 00:05:55.489 } 00:05:55.489 ] 00:05:55.489 } 00:05:55.489 ] 00:05:55.489 } 00:05:55.489 [2024-12-02 07:33:21.104819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.748 [2024-12-02 07:33:21.156899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.748  [2024-12-02T07:33:21.632Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:56.008 00:05:56.008 07:33:21 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.008 00:05:56.008 real 0m14.976s 00:05:56.008 user 0m10.915s 00:05:56.008 sys 0m2.693s 00:05:56.008 07:33:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.008 07:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.008 ************************************ 00:05:56.008 END TEST spdk_dd_basic_rw 00:05:56.008 ************************************ 00:05:56.008 07:33:21 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:56.008 07:33:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.008 07:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.008 07:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.008 ************************************ 00:05:56.008 START TEST spdk_dd_posix 00:05:56.008 ************************************ 00:05:56.008 07:33:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:56.008 * Looking for test storage... 
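The spdk_dd_basic_rw totals just above (real 0m14.976s) close out a matrix of block sizes and queue depths, each cell repeating the write / read-back / diff / clear_nvme cycle sketched earlier, plus the offset pass. In loop form, with the array names taken from the for-loops in the trace and their contents inferred from the passes that actually ran (basic_rw_cycle is a hypothetical name for that cycle):

    bss=(4096 8192 16384)   # block sizes observed above
    qds=(1 64)              # queue depths observed above
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            basic_rw_cycle "$bs" "$qd"
        done
    done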
00:05:56.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:56.008 07:33:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.008 07:33:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.008 07:33:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.268 07:33:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.268 07:33:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.268 07:33:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.268 07:33:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.268 07:33:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.268 07:33:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.268 07:33:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.268 07:33:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.268 07:33:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.268 07:33:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.268 07:33:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.268 07:33:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.268 07:33:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.268 07:33:21 -- scripts/common.sh@344 -- # : 1 00:05:56.268 07:33:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.268 07:33:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.268 07:33:21 -- scripts/common.sh@364 -- # decimal 1 00:05:56.268 07:33:21 -- scripts/common.sh@352 -- # local d=1 00:05:56.268 07:33:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.268 07:33:21 -- scripts/common.sh@354 -- # echo 1 00:05:56.268 07:33:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.268 07:33:21 -- scripts/common.sh@365 -- # decimal 2 00:05:56.268 07:33:21 -- scripts/common.sh@352 -- # local d=2 00:05:56.268 07:33:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.268 07:33:21 -- scripts/common.sh@354 -- # echo 2 00:05:56.268 07:33:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.268 07:33:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.268 07:33:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.268 07:33:21 -- scripts/common.sh@367 -- # return 0 00:05:56.268 07:33:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.268 07:33:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.268 --rc genhtml_branch_coverage=1 00:05:56.268 --rc genhtml_function_coverage=1 00:05:56.268 --rc genhtml_legend=1 00:05:56.268 --rc geninfo_all_blocks=1 00:05:56.268 --rc geninfo_unexecuted_blocks=1 00:05:56.268 00:05:56.268 ' 00:05:56.268 07:33:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.268 --rc genhtml_branch_coverage=1 00:05:56.268 --rc genhtml_function_coverage=1 00:05:56.268 --rc genhtml_legend=1 00:05:56.268 --rc geninfo_all_blocks=1 00:05:56.268 --rc geninfo_unexecuted_blocks=1 00:05:56.268 00:05:56.268 ' 00:05:56.268 07:33:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.268 --rc genhtml_branch_coverage=1 00:05:56.268 --rc genhtml_function_coverage=1 00:05:56.268 --rc genhtml_legend=1 00:05:56.268 --rc geninfo_all_blocks=1 00:05:56.268 --rc geninfo_unexecuted_blocks=1 00:05:56.268 00:05:56.268 ' 00:05:56.268 07:33:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.268 --rc genhtml_branch_coverage=1 00:05:56.268 --rc genhtml_function_coverage=1 00:05:56.268 --rc genhtml_legend=1 00:05:56.268 --rc geninfo_all_blocks=1 00:05:56.268 --rc geninfo_unexecuted_blocks=1 00:05:56.268 00:05:56.268 ' 00:05:56.268 07:33:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.268 07:33:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.268 07:33:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.268 07:33:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.268 07:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.268 07:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.268 07:33:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.268 07:33:21 -- paths/export.sh@5 -- # export PATH 00:05:56.268 07:33:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.268 07:33:21 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:56.268 07:33:21 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:56.268 07:33:21 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:56.268 07:33:21 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:56.268 07:33:21 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.268 07:33:21 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.268 07:33:21 -- dd/posix.sh@130 -- # tests 00:05:56.268 07:33:21 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:56.268 * First test run, liburing in use 00:05:56.268 07:33:21 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:56.268 07:33:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.268 07:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.268 07:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 ************************************ 00:05:56.268 START TEST dd_flag_append 00:05:56.268 ************************************ 00:05:56.268 07:33:21 -- common/autotest_common.sh@1114 -- # append 00:05:56.268 07:33:21 -- dd/posix.sh@16 -- # local dump0 00:05:56.268 07:33:21 -- dd/posix.sh@17 -- # local dump1 00:05:56.268 07:33:21 -- dd/posix.sh@19 -- # gen_bytes 32 00:05:56.268 07:33:21 -- dd/common.sh@98 -- # xtrace_disable 00:05:56.268 07:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 07:33:21 -- dd/posix.sh@19 -- # dump0=8orrvmr262tuoyecpyk7fm32w03e8gps 00:05:56.268 07:33:21 -- dd/posix.sh@20 -- # gen_bytes 32 00:05:56.268 07:33:21 -- dd/common.sh@98 -- # xtrace_disable 00:05:56.268 07:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 07:33:21 -- dd/posix.sh@20 -- # dump1=h6ilogh3c8nsdjrxg0m9kolgae7w7d05 00:05:56.268 07:33:21 -- dd/posix.sh@22 -- # printf %s 8orrvmr262tuoyecpyk7fm32w03e8gps 00:05:56.268 07:33:21 -- dd/posix.sh@23 -- # printf %s h6ilogh3c8nsdjrxg0m9kolgae7w7d05 00:05:56.268 07:33:21 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:56.268 [2024-12-02 07:33:21.727249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:56.268 [2024-12-02 07:33:21.727364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58091 ] 00:05:56.268 [2024-12-02 07:33:21.860900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.526 [2024-12-02 07:33:21.913186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.526  [2024-12-02T07:33:22.150Z] Copying: 32/32 [B] (average 31 kBps) 00:05:56.526 00:05:56.526 07:33:22 -- dd/posix.sh@27 -- # [[ h6ilogh3c8nsdjrxg0m9kolgae7w7d058orrvmr262tuoyecpyk7fm32w03e8gps == \h\6\i\l\o\g\h\3\c\8\n\s\d\j\r\x\g\0\m\9\k\o\l\g\a\e\7\w\7\d\0\5\8\o\r\r\v\m\r\2\6\2\t\u\o\y\e\c\p\y\k\7\f\m\3\2\w\0\3\e\8\g\p\s ]] 00:05:56.526 00:05:56.526 real 0m0.442s 00:05:56.526 user 0m0.231s 00:05:56.526 sys 0m0.093s 00:05:56.526 07:33:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.526 07:33:22 -- common/autotest_common.sh@10 -- # set +x 00:05:56.526 ************************************ 00:05:56.526 END TEST dd_flag_append 00:05:56.526 ************************************ 00:05:56.785 07:33:22 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:56.785 07:33:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.785 07:33:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.785 07:33:22 -- common/autotest_common.sh@10 -- # set +x 00:05:56.785 ************************************ 00:05:56.785 START TEST dd_flag_directory 00:05:56.785 ************************************ 00:05:56.785 07:33:22 -- common/autotest_common.sh@1114 -- # directory 00:05:56.785 07:33:22 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.785 07:33:22 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.785 07:33:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.785 07:33:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.785 07:33:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.785 07:33:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.785 07:33:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.785 07:33:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.785 07:33:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.785 07:33:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.785 07:33:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:56.785 07:33:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.785 [2024-12-02 07:33:22.217775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:56.785 [2024-12-02 07:33:22.217870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58112 ] 00:05:56.785 [2024-12-02 07:33:22.349521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.785 [2024-12-02 07:33:22.403051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.045 [2024-12-02 07:33:22.447767] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.045 [2024-12-02 07:33:22.447821] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.045 [2024-12-02 07:33:22.447848] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.045 [2024-12-02 07:33:22.501922] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:05:57.045 07:33:22 -- common/autotest_common.sh@653 -- # es=236 00:05:57.045 07:33:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.045 07:33:22 -- common/autotest_common.sh@662 -- # es=108 00:05:57.045 07:33:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:57.045 07:33:22 -- common/autotest_common.sh@670 -- # es=1 00:05:57.045 07:33:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.045 07:33:22 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:57.045 07:33:22 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.045 07:33:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:57.045 07:33:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.045 07:33:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.045 07:33:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.045 07:33:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.045 07:33:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.045 07:33:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.045 07:33:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.045 07:33:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:57.045 07:33:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:57.045 [2024-12-02 07:33:22.642463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.045 [2024-12-02 07:33:22.642554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58116 ] 00:05:57.304 [2024-12-02 07:33:22.778940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.304 [2024-12-02 07:33:22.826150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.304 [2024-12-02 07:33:22.866637] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.304 [2024-12-02 07:33:22.866698] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:57.304 [2024-12-02 07:33:22.866710] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.304 [2024-12-02 07:33:22.921346] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:05:57.563 07:33:23 -- common/autotest_common.sh@653 -- # es=236 00:05:57.563 07:33:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.563 07:33:23 -- common/autotest_common.sh@662 -- # es=108 00:05:57.563 07:33:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:57.563 07:33:23 -- common/autotest_common.sh@670 -- # es=1 00:05:57.563 07:33:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.563 00:05:57.563 real 0m0.844s 00:05:57.563 user 0m0.455s 00:05:57.563 sys 0m0.181s 00:05:57.563 07:33:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.563 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:57.563 ************************************ 00:05:57.563 END TEST dd_flag_directory 00:05:57.563 ************************************ 00:05:57.563 07:33:23 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:57.563 07:33:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.563 07:33:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.563 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:57.563 ************************************ 00:05:57.563 START TEST dd_flag_nofollow 00:05:57.563 ************************************ 00:05:57.563 07:33:23 -- common/autotest_common.sh@1114 -- # nofollow 00:05:57.563 07:33:23 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:57.563 07:33:23 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:57.563 07:33:23 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:57.563 07:33:23 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:57.563 07:33:23 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.563 07:33:23 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.563 07:33:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.563 07:33:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.563 07:33:23 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.563 07:33:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.563 07:33:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.563 07:33:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.563 07:33:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.564 07:33:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.564 07:33:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:57.564 07:33:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.564 [2024-12-02 07:33:23.112799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.564 [2024-12-02 07:33:23.112864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:05:57.823 [2024-12-02 07:33:23.243276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.823 [2024-12-02 07:33:23.293193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.823 [2024-12-02 07:33:23.335558] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:57.823 [2024-12-02 07:33:23.335627] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:57.823 [2024-12-02 07:33:23.335654] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.823 [2024-12-02 07:33:23.389566] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:05:58.082 07:33:23 -- common/autotest_common.sh@653 -- # es=216 00:05:58.082 07:33:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.082 07:33:23 -- common/autotest_common.sh@662 -- # es=88 00:05:58.082 07:33:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:58.082 07:33:23 -- common/autotest_common.sh@670 -- # es=1 00:05:58.082 07:33:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.082 07:33:23 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:58.082 07:33:23 -- common/autotest_common.sh@650 -- # local es=0 00:05:58.082 07:33:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:58.082 07:33:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.082 07:33:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.082 07:33:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.082 07:33:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.082 07:33:23 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.082 07:33:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.082 07:33:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:58.082 07:33:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:58.082 07:33:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:58.082 [2024-12-02 07:33:23.532029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.082 [2024-12-02 07:33:23.532125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58154 ] 00:05:58.082 [2024-12-02 07:33:23.667807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.342 [2024-12-02 07:33:23.727660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.342 [2024-12-02 07:33:23.773193] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:58.342 [2024-12-02 07:33:23.773243] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:58.342 [2024-12-02 07:33:23.773270] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.342 [2024-12-02 07:33:23.827321] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:05:58.342 07:33:23 -- common/autotest_common.sh@653 -- # es=216 00:05:58.342 07:33:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.342 07:33:23 -- common/autotest_common.sh@662 -- # es=88 00:05:58.342 07:33:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:58.342 07:33:23 -- common/autotest_common.sh@670 -- # es=1 00:05:58.342 07:33:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.342 07:33:23 -- dd/posix.sh@46 -- # gen_bytes 512 00:05:58.342 07:33:23 -- dd/common.sh@98 -- # xtrace_disable 00:05:58.342 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:58.342 07:33:23 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.342 [2024-12-02 07:33:23.957916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:58.342 [2024-12-02 07:33:23.957996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58167 ] 00:05:58.601 [2024-12-02 07:33:24.087920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.601 [2024-12-02 07:33:24.134100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.601  [2024-12-02T07:33:24.485Z] Copying: 512/512 [B] (average 500 kBps) 00:05:58.861 00:05:58.861 07:33:24 -- dd/posix.sh@49 -- # [[ jaezgjo29mb1oma8ozfya1qi7a0la8gaugln7tee0pvhys796qz7zm328jbdww5nh2i8xbsl6bjppd3kl6dvhnnfmcnrae83a90377vs1tt27h9f0ct7nuvavdygn2rg13opecas8xar5wdfkvg0gr9hf8qfq5ueag8yf7i4ztpib9u9st7ozzga4cm3ac43q2pk759zwgmwnua11ut5hq8tzdy8slvhanaqfqv50b21f5jt0jk8znv254usnm4x77x8mj22y4jgo88wror5tmltyoxvrfx58nepy6ptl25g99cpfdtrdpbfx1waluyiux3t2jca4s08ozcmx0u7wxip7nhcenz9oevo4escmxocylkqw4v6jramruhtzn8zn96dr0z5s987hsqmwlwf94vr3fm1tfxp9wegvs6v0lvl27h7tvuvszlshlhxfs6lt951gb946m7rgko1ej9dbehstfy3kj9ru0f0qf62mpukabefwrnil2u7nhq0v4rd == \j\a\e\z\g\j\o\2\9\m\b\1\o\m\a\8\o\z\f\y\a\1\q\i\7\a\0\l\a\8\g\a\u\g\l\n\7\t\e\e\0\p\v\h\y\s\7\9\6\q\z\7\z\m\3\2\8\j\b\d\w\w\5\n\h\2\i\8\x\b\s\l\6\b\j\p\p\d\3\k\l\6\d\v\h\n\n\f\m\c\n\r\a\e\8\3\a\9\0\3\7\7\v\s\1\t\t\2\7\h\9\f\0\c\t\7\n\u\v\a\v\d\y\g\n\2\r\g\1\3\o\p\e\c\a\s\8\x\a\r\5\w\d\f\k\v\g\0\g\r\9\h\f\8\q\f\q\5\u\e\a\g\8\y\f\7\i\4\z\t\p\i\b\9\u\9\s\t\7\o\z\z\g\a\4\c\m\3\a\c\4\3\q\2\p\k\7\5\9\z\w\g\m\w\n\u\a\1\1\u\t\5\h\q\8\t\z\d\y\8\s\l\v\h\a\n\a\q\f\q\v\5\0\b\2\1\f\5\j\t\0\j\k\8\z\n\v\2\5\4\u\s\n\m\4\x\7\7\x\8\m\j\2\2\y\4\j\g\o\8\8\w\r\o\r\5\t\m\l\t\y\o\x\v\r\f\x\5\8\n\e\p\y\6\p\t\l\2\5\g\9\9\c\p\f\d\t\r\d\p\b\f\x\1\w\a\l\u\y\i\u\x\3\t\2\j\c\a\4\s\0\8\o\z\c\m\x\0\u\7\w\x\i\p\7\n\h\c\e\n\z\9\o\e\v\o\4\e\s\c\m\x\o\c\y\l\k\q\w\4\v\6\j\r\a\m\r\u\h\t\z\n\8\z\n\9\6\d\r\0\z\5\s\9\8\7\h\s\q\m\w\l\w\f\9\4\v\r\3\f\m\1\t\f\x\p\9\w\e\g\v\s\6\v\0\l\v\l\2\7\h\7\t\v\u\v\s\z\l\s\h\l\h\x\f\s\6\l\t\9\5\1\g\b\9\4\6\m\7\r\g\k\o\1\e\j\9\d\b\e\h\s\t\f\y\3\k\j\9\r\u\0\f\0\q\f\6\2\m\p\u\k\a\b\e\f\w\r\n\i\l\2\u\7\n\h\q\0\v\4\r\d ]] 00:05:58.861 00:05:58.861 real 0m1.280s 00:05:58.861 user 0m0.692s 00:05:58.861 sys 0m0.261s 00:05:58.861 07:33:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.861 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:05:58.861 ************************************ 00:05:58.861 END TEST dd_flag_nofollow 00:05:58.861 ************************************ 00:05:58.861 07:33:24 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:58.861 07:33:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.861 07:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.861 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:05:58.861 ************************************ 00:05:58.861 START TEST dd_flag_noatime 00:05:58.861 ************************************ 00:05:58.861 07:33:24 -- common/autotest_common.sh@1114 -- # noatime 00:05:58.861 07:33:24 -- dd/posix.sh@53 -- # local atime_if 00:05:58.861 07:33:24 -- dd/posix.sh@54 -- # local atime_of 00:05:58.861 07:33:24 -- dd/posix.sh@58 -- # gen_bytes 512 00:05:58.861 07:33:24 -- dd/common.sh@98 -- # xtrace_disable 00:05:58.861 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:05:58.861 07:33:24 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.861 07:33:24 -- dd/posix.sh@60 -- # atime_if=1733124804 
00:05:58.861 07:33:24 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.861 07:33:24 -- dd/posix.sh@61 -- # atime_of=1733124804 00:05:58.861 07:33:24 -- dd/posix.sh@66 -- # sleep 1 00:05:59.798 07:33:25 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.057 [2024-12-02 07:33:25.471236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.057 [2024-12-02 07:33:25.471360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ] 00:06:00.057 [2024-12-02 07:33:25.607659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.057 [2024-12-02 07:33:25.665259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.315  [2024-12-02T07:33:25.939Z] Copying: 512/512 [B] (average 500 kBps) 00:06:00.315 00:06:00.315 07:33:25 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.315 07:33:25 -- dd/posix.sh@69 -- # (( atime_if == 1733124804 )) 00:06:00.315 07:33:25 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.315 07:33:25 -- dd/posix.sh@70 -- # (( atime_of == 1733124804 )) 00:06:00.315 07:33:25 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.573 [2024-12-02 07:33:25.949515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:00.573 [2024-12-02 07:33:25.949607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58219 ] 00:06:00.573 [2024-12-02 07:33:26.076662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.573 [2024-12-02 07:33:26.123951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.573  [2024-12-02T07:33:26.455Z] Copying: 512/512 [B] (average 500 kBps) 00:06:00.831 00:06:00.831 07:33:26 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.831 07:33:26 -- dd/posix.sh@73 -- # (( atime_if < 1733124806 )) 00:06:00.831 00:06:00.831 real 0m1.935s 00:06:00.831 user 0m0.486s 00:06:00.831 sys 0m0.211s 00:06:00.831 07:33:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.831 ************************************ 00:06:00.831 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:06:00.831 END TEST dd_flag_noatime 00:06:00.831 ************************************ 00:06:00.831 07:33:26 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:00.831 07:33:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.831 07:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.831 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:06:00.831 ************************************ 00:06:00.831 START TEST dd_flags_misc 00:06:00.831 ************************************ 00:06:00.831 07:33:26 -- common/autotest_common.sh@1114 -- # io 00:06:00.831 07:33:26 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:00.831 07:33:26 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:00.831 07:33:26 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:00.831 07:33:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:00.831 07:33:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:00.831 07:33:26 -- dd/common.sh@98 -- # xtrace_disable 00:06:00.831 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:06:00.831 07:33:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:00.831 07:33:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:00.831 [2024-12-02 07:33:26.436843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:00.831 [2024-12-02 07:33:26.436934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58240 ] 00:06:01.089 [2024-12-02 07:33:26.574655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.089 [2024-12-02 07:33:26.626122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.089  [2024-12-02T07:33:26.971Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.347 00:06:01.347 07:33:26 -- dd/posix.sh@93 -- # [[ 11k8ovy7fvik6d5xeszmspnqb1csw9naob1burcnbvxhxcbcint782wr2ud6onftxf7sk7qcevhvy7dmup4mqsl6t99ash10abaqohyv2sa0dhr642d4y36qd859unq7kffn88jpyxm2asxzeb9nn6kfxc76236cskl27d3xwr4zfuaxs48d37i1go7ooo494qc07fh7y9tyu8rlkm5qzqmp3aglq2s5jogn5wqdtd0pd9j11cj6mpws3ip80db1ykbiq8d6k07wuw86opsqi5kb988h9jajmk43vensu2mf45p6ao27c7rchre331kskbnupdeq8dsf9erw7awobsgsiq5a2qne97ulerb3z4mygnepr0yd9z554dcgagqbqdv9sxyy55p49pzafzwgga0fpuazeqf97zmsxm96z7gb21mu2v44j5rxs9xtyx5cwebhnjweds3bxs8yk3xtfzyocslcwdv21ss3a6ohawa8n4i53xlr9k7u9mxfgbml == \1\1\k\8\o\v\y\7\f\v\i\k\6\d\5\x\e\s\z\m\s\p\n\q\b\1\c\s\w\9\n\a\o\b\1\b\u\r\c\n\b\v\x\h\x\c\b\c\i\n\t\7\8\2\w\r\2\u\d\6\o\n\f\t\x\f\7\s\k\7\q\c\e\v\h\v\y\7\d\m\u\p\4\m\q\s\l\6\t\9\9\a\s\h\1\0\a\b\a\q\o\h\y\v\2\s\a\0\d\h\r\6\4\2\d\4\y\3\6\q\d\8\5\9\u\n\q\7\k\f\f\n\8\8\j\p\y\x\m\2\a\s\x\z\e\b\9\n\n\6\k\f\x\c\7\6\2\3\6\c\s\k\l\2\7\d\3\x\w\r\4\z\f\u\a\x\s\4\8\d\3\7\i\1\g\o\7\o\o\o\4\9\4\q\c\0\7\f\h\7\y\9\t\y\u\8\r\l\k\m\5\q\z\q\m\p\3\a\g\l\q\2\s\5\j\o\g\n\5\w\q\d\t\d\0\p\d\9\j\1\1\c\j\6\m\p\w\s\3\i\p\8\0\d\b\1\y\k\b\i\q\8\d\6\k\0\7\w\u\w\8\6\o\p\s\q\i\5\k\b\9\8\8\h\9\j\a\j\m\k\4\3\v\e\n\s\u\2\m\f\4\5\p\6\a\o\2\7\c\7\r\c\h\r\e\3\3\1\k\s\k\b\n\u\p\d\e\q\8\d\s\f\9\e\r\w\7\a\w\o\b\s\g\s\i\q\5\a\2\q\n\e\9\7\u\l\e\r\b\3\z\4\m\y\g\n\e\p\r\0\y\d\9\z\5\5\4\d\c\g\a\g\q\b\q\d\v\9\s\x\y\y\5\5\p\4\9\p\z\a\f\z\w\g\g\a\0\f\p\u\a\z\e\q\f\9\7\z\m\s\x\m\9\6\z\7\g\b\2\1\m\u\2\v\4\4\j\5\r\x\s\9\x\t\y\x\5\c\w\e\b\h\n\j\w\e\d\s\3\b\x\s\8\y\k\3\x\t\f\z\y\o\c\s\l\c\w\d\v\2\1\s\s\3\a\6\o\h\a\w\a\8\n\4\i\5\3\x\l\r\9\k\7\u\9\m\x\f\g\b\m\l ]] 00:06:01.347 07:33:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.347 07:33:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:01.347 [2024-12-02 07:33:26.869549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:01.347 [2024-12-02 07:33:26.869657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:06:01.605 [2024-12-02 07:33:27.000789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.605 [2024-12-02 07:33:27.048896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.605  [2024-12-02T07:33:27.488Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.864 00:06:01.864 07:33:27 -- dd/posix.sh@93 -- # [[ 11k8ovy7fvik6d5xeszmspnqb1csw9naob1burcnbvxhxcbcint782wr2ud6onftxf7sk7qcevhvy7dmup4mqsl6t99ash10abaqohyv2sa0dhr642d4y36qd859unq7kffn88jpyxm2asxzeb9nn6kfxc76236cskl27d3xwr4zfuaxs48d37i1go7ooo494qc07fh7y9tyu8rlkm5qzqmp3aglq2s5jogn5wqdtd0pd9j11cj6mpws3ip80db1ykbiq8d6k07wuw86opsqi5kb988h9jajmk43vensu2mf45p6ao27c7rchre331kskbnupdeq8dsf9erw7awobsgsiq5a2qne97ulerb3z4mygnepr0yd9z554dcgagqbqdv9sxyy55p49pzafzwgga0fpuazeqf97zmsxm96z7gb21mu2v44j5rxs9xtyx5cwebhnjweds3bxs8yk3xtfzyocslcwdv21ss3a6ohawa8n4i53xlr9k7u9mxfgbml == \1\1\k\8\o\v\y\7\f\v\i\k\6\d\5\x\e\s\z\m\s\p\n\q\b\1\c\s\w\9\n\a\o\b\1\b\u\r\c\n\b\v\x\h\x\c\b\c\i\n\t\7\8\2\w\r\2\u\d\6\o\n\f\t\x\f\7\s\k\7\q\c\e\v\h\v\y\7\d\m\u\p\4\m\q\s\l\6\t\9\9\a\s\h\1\0\a\b\a\q\o\h\y\v\2\s\a\0\d\h\r\6\4\2\d\4\y\3\6\q\d\8\5\9\u\n\q\7\k\f\f\n\8\8\j\p\y\x\m\2\a\s\x\z\e\b\9\n\n\6\k\f\x\c\7\6\2\3\6\c\s\k\l\2\7\d\3\x\w\r\4\z\f\u\a\x\s\4\8\d\3\7\i\1\g\o\7\o\o\o\4\9\4\q\c\0\7\f\h\7\y\9\t\y\u\8\r\l\k\m\5\q\z\q\m\p\3\a\g\l\q\2\s\5\j\o\g\n\5\w\q\d\t\d\0\p\d\9\j\1\1\c\j\6\m\p\w\s\3\i\p\8\0\d\b\1\y\k\b\i\q\8\d\6\k\0\7\w\u\w\8\6\o\p\s\q\i\5\k\b\9\8\8\h\9\j\a\j\m\k\4\3\v\e\n\s\u\2\m\f\4\5\p\6\a\o\2\7\c\7\r\c\h\r\e\3\3\1\k\s\k\b\n\u\p\d\e\q\8\d\s\f\9\e\r\w\7\a\w\o\b\s\g\s\i\q\5\a\2\q\n\e\9\7\u\l\e\r\b\3\z\4\m\y\g\n\e\p\r\0\y\d\9\z\5\5\4\d\c\g\a\g\q\b\q\d\v\9\s\x\y\y\5\5\p\4\9\p\z\a\f\z\w\g\g\a\0\f\p\u\a\z\e\q\f\9\7\z\m\s\x\m\9\6\z\7\g\b\2\1\m\u\2\v\4\4\j\5\r\x\s\9\x\t\y\x\5\c\w\e\b\h\n\j\w\e\d\s\3\b\x\s\8\y\k\3\x\t\f\z\y\o\c\s\l\c\w\d\v\2\1\s\s\3\a\6\o\h\a\w\a\8\n\4\i\5\3\x\l\r\9\k\7\u\9\m\x\f\g\b\m\l ]] 00:06:01.864 07:33:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.864 07:33:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:01.864 [2024-12-02 07:33:27.275082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:01.864 [2024-12-02 07:33:27.275162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58255 ] 00:06:01.865 [2024-12-02 07:33:27.403919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.865 [2024-12-02 07:33:27.450045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.124  [2024-12-02T07:33:27.748Z] Copying: 512/512 [B] (average 166 kBps) 00:06:02.124 00:06:02.124 07:33:27 -- dd/posix.sh@93 -- # [[ 11k8ovy7fvik6d5xeszmspnqb1csw9naob1burcnbvxhxcbcint782wr2ud6onftxf7sk7qcevhvy7dmup4mqsl6t99ash10abaqohyv2sa0dhr642d4y36qd859unq7kffn88jpyxm2asxzeb9nn6kfxc76236cskl27d3xwr4zfuaxs48d37i1go7ooo494qc07fh7y9tyu8rlkm5qzqmp3aglq2s5jogn5wqdtd0pd9j11cj6mpws3ip80db1ykbiq8d6k07wuw86opsqi5kb988h9jajmk43vensu2mf45p6ao27c7rchre331kskbnupdeq8dsf9erw7awobsgsiq5a2qne97ulerb3z4mygnepr0yd9z554dcgagqbqdv9sxyy55p49pzafzwgga0fpuazeqf97zmsxm96z7gb21mu2v44j5rxs9xtyx5cwebhnjweds3bxs8yk3xtfzyocslcwdv21ss3a6ohawa8n4i53xlr9k7u9mxfgbml == \1\1\k\8\o\v\y\7\f\v\i\k\6\d\5\x\e\s\z\m\s\p\n\q\b\1\c\s\w\9\n\a\o\b\1\b\u\r\c\n\b\v\x\h\x\c\b\c\i\n\t\7\8\2\w\r\2\u\d\6\o\n\f\t\x\f\7\s\k\7\q\c\e\v\h\v\y\7\d\m\u\p\4\m\q\s\l\6\t\9\9\a\s\h\1\0\a\b\a\q\o\h\y\v\2\s\a\0\d\h\r\6\4\2\d\4\y\3\6\q\d\8\5\9\u\n\q\7\k\f\f\n\8\8\j\p\y\x\m\2\a\s\x\z\e\b\9\n\n\6\k\f\x\c\7\6\2\3\6\c\s\k\l\2\7\d\3\x\w\r\4\z\f\u\a\x\s\4\8\d\3\7\i\1\g\o\7\o\o\o\4\9\4\q\c\0\7\f\h\7\y\9\t\y\u\8\r\l\k\m\5\q\z\q\m\p\3\a\g\l\q\2\s\5\j\o\g\n\5\w\q\d\t\d\0\p\d\9\j\1\1\c\j\6\m\p\w\s\3\i\p\8\0\d\b\1\y\k\b\i\q\8\d\6\k\0\7\w\u\w\8\6\o\p\s\q\i\5\k\b\9\8\8\h\9\j\a\j\m\k\4\3\v\e\n\s\u\2\m\f\4\5\p\6\a\o\2\7\c\7\r\c\h\r\e\3\3\1\k\s\k\b\n\u\p\d\e\q\8\d\s\f\9\e\r\w\7\a\w\o\b\s\g\s\i\q\5\a\2\q\n\e\9\7\u\l\e\r\b\3\z\4\m\y\g\n\e\p\r\0\y\d\9\z\5\5\4\d\c\g\a\g\q\b\q\d\v\9\s\x\y\y\5\5\p\4\9\p\z\a\f\z\w\g\g\a\0\f\p\u\a\z\e\q\f\9\7\z\m\s\x\m\9\6\z\7\g\b\2\1\m\u\2\v\4\4\j\5\r\x\s\9\x\t\y\x\5\c\w\e\b\h\n\j\w\e\d\s\3\b\x\s\8\y\k\3\x\t\f\z\y\o\c\s\l\c\w\d\v\2\1\s\s\3\a\6\o\h\a\w\a\8\n\4\i\5\3\x\l\r\9\k\7\u\9\m\x\f\g\b\m\l ]] 00:06:02.124 07:33:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.124 07:33:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:02.124 [2024-12-02 07:33:27.702484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.124 [2024-12-02 07:33:27.702590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58257 ] 00:06:02.384 [2024-12-02 07:33:27.839680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.384 [2024-12-02 07:33:27.892270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.384  [2024-12-02T07:33:28.267Z] Copying: 512/512 [B] (average 250 kBps) 00:06:02.643 00:06:02.643 07:33:28 -- dd/posix.sh@93 -- # [[ 11k8ovy7fvik6d5xeszmspnqb1csw9naob1burcnbvxhxcbcint782wr2ud6onftxf7sk7qcevhvy7dmup4mqsl6t99ash10abaqohyv2sa0dhr642d4y36qd859unq7kffn88jpyxm2asxzeb9nn6kfxc76236cskl27d3xwr4zfuaxs48d37i1go7ooo494qc07fh7y9tyu8rlkm5qzqmp3aglq2s5jogn5wqdtd0pd9j11cj6mpws3ip80db1ykbiq8d6k07wuw86opsqi5kb988h9jajmk43vensu2mf45p6ao27c7rchre331kskbnupdeq8dsf9erw7awobsgsiq5a2qne97ulerb3z4mygnepr0yd9z554dcgagqbqdv9sxyy55p49pzafzwgga0fpuazeqf97zmsxm96z7gb21mu2v44j5rxs9xtyx5cwebhnjweds3bxs8yk3xtfzyocslcwdv21ss3a6ohawa8n4i53xlr9k7u9mxfgbml == \1\1\k\8\o\v\y\7\f\v\i\k\6\d\5\x\e\s\z\m\s\p\n\q\b\1\c\s\w\9\n\a\o\b\1\b\u\r\c\n\b\v\x\h\x\c\b\c\i\n\t\7\8\2\w\r\2\u\d\6\o\n\f\t\x\f\7\s\k\7\q\c\e\v\h\v\y\7\d\m\u\p\4\m\q\s\l\6\t\9\9\a\s\h\1\0\a\b\a\q\o\h\y\v\2\s\a\0\d\h\r\6\4\2\d\4\y\3\6\q\d\8\5\9\u\n\q\7\k\f\f\n\8\8\j\p\y\x\m\2\a\s\x\z\e\b\9\n\n\6\k\f\x\c\7\6\2\3\6\c\s\k\l\2\7\d\3\x\w\r\4\z\f\u\a\x\s\4\8\d\3\7\i\1\g\o\7\o\o\o\4\9\4\q\c\0\7\f\h\7\y\9\t\y\u\8\r\l\k\m\5\q\z\q\m\p\3\a\g\l\q\2\s\5\j\o\g\n\5\w\q\d\t\d\0\p\d\9\j\1\1\c\j\6\m\p\w\s\3\i\p\8\0\d\b\1\y\k\b\i\q\8\d\6\k\0\7\w\u\w\8\6\o\p\s\q\i\5\k\b\9\8\8\h\9\j\a\j\m\k\4\3\v\e\n\s\u\2\m\f\4\5\p\6\a\o\2\7\c\7\r\c\h\r\e\3\3\1\k\s\k\b\n\u\p\d\e\q\8\d\s\f\9\e\r\w\7\a\w\o\b\s\g\s\i\q\5\a\2\q\n\e\9\7\u\l\e\r\b\3\z\4\m\y\g\n\e\p\r\0\y\d\9\z\5\5\4\d\c\g\a\g\q\b\q\d\v\9\s\x\y\y\5\5\p\4\9\p\z\a\f\z\w\g\g\a\0\f\p\u\a\z\e\q\f\9\7\z\m\s\x\m\9\6\z\7\g\b\2\1\m\u\2\v\4\4\j\5\r\x\s\9\x\t\y\x\5\c\w\e\b\h\n\j\w\e\d\s\3\b\x\s\8\y\k\3\x\t\f\z\y\o\c\s\l\c\w\d\v\2\1\s\s\3\a\6\o\h\a\w\a\8\n\4\i\5\3\x\l\r\9\k\7\u\9\m\x\f\g\b\m\l ]] 00:06:02.644 07:33:28 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:02.644 07:33:28 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:02.644 07:33:28 -- dd/common.sh@98 -- # xtrace_disable 00:06:02.644 07:33:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.644 07:33:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.644 07:33:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:02.644 [2024-12-02 07:33:28.145343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.644 [2024-12-02 07:33:28.145422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:06:02.903 [2024-12-02 07:33:28.275345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.903 [2024-12-02 07:33:28.321800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.903  [2024-12-02T07:33:28.527Z] Copying: 512/512 [B] (average 500 kBps) 00:06:02.903 00:06:02.903 07:33:28 -- dd/posix.sh@93 -- # [[ r9ik2h8nzhmw7l5z71sleasjhetr42lnot2awjph10ucy0ofb8qr95b36pcqldwar3w9qlrgex1sqtql229r4rutmvnpbkb1d0gdg2cwvh7yjbqtbt2w5x0wtikmms9ho8vqviuya2c91cny6y1wtuqgmz8nf4twpjwwgtlebuj0cirpb8zjckp8xn1qouih7wmzlmah21e335gknj4r2l8skx5rlx4alqrod6xxv4wmnmlnuwyijwp1ac21jf0m9yke43jcpwjcu6gzujvu626opbnv9uxukjd4ir3brhlnsq6g8uy2rjbneri3zauf9506guhtz94lta4v3tywrtt0ydff28acnwgeifv9cxpfifuunz9e3fgxhz1ycueds7d5dqjzyqgcvf141i99toe48uo0sp09be1axodo4jjrdwjyl94qp2lu96knzcfr2fseuwn5bly77mzz4ibr6kop9stdtxfplqjogoo22q26v60eb9o1r81zkb9py8np == \r\9\i\k\2\h\8\n\z\h\m\w\7\l\5\z\7\1\s\l\e\a\s\j\h\e\t\r\4\2\l\n\o\t\2\a\w\j\p\h\1\0\u\c\y\0\o\f\b\8\q\r\9\5\b\3\6\p\c\q\l\d\w\a\r\3\w\9\q\l\r\g\e\x\1\s\q\t\q\l\2\2\9\r\4\r\u\t\m\v\n\p\b\k\b\1\d\0\g\d\g\2\c\w\v\h\7\y\j\b\q\t\b\t\2\w\5\x\0\w\t\i\k\m\m\s\9\h\o\8\v\q\v\i\u\y\a\2\c\9\1\c\n\y\6\y\1\w\t\u\q\g\m\z\8\n\f\4\t\w\p\j\w\w\g\t\l\e\b\u\j\0\c\i\r\p\b\8\z\j\c\k\p\8\x\n\1\q\o\u\i\h\7\w\m\z\l\m\a\h\2\1\e\3\3\5\g\k\n\j\4\r\2\l\8\s\k\x\5\r\l\x\4\a\l\q\r\o\d\6\x\x\v\4\w\m\n\m\l\n\u\w\y\i\j\w\p\1\a\c\2\1\j\f\0\m\9\y\k\e\4\3\j\c\p\w\j\c\u\6\g\z\u\j\v\u\6\2\6\o\p\b\n\v\9\u\x\u\k\j\d\4\i\r\3\b\r\h\l\n\s\q\6\g\8\u\y\2\r\j\b\n\e\r\i\3\z\a\u\f\9\5\0\6\g\u\h\t\z\9\4\l\t\a\4\v\3\t\y\w\r\t\t\0\y\d\f\f\2\8\a\c\n\w\g\e\i\f\v\9\c\x\p\f\i\f\u\u\n\z\9\e\3\f\g\x\h\z\1\y\c\u\e\d\s\7\d\5\d\q\j\z\y\q\g\c\v\f\1\4\1\i\9\9\t\o\e\4\8\u\o\0\s\p\0\9\b\e\1\a\x\o\d\o\4\j\j\r\d\w\j\y\l\9\4\q\p\2\l\u\9\6\k\n\z\c\f\r\2\f\s\e\u\w\n\5\b\l\y\7\7\m\z\z\4\i\b\r\6\k\o\p\9\s\t\d\t\x\f\p\l\q\j\o\g\o\o\2\2\q\2\6\v\6\0\e\b\9\o\1\r\8\1\z\k\b\9\p\y\8\n\p ]] 00:06:02.903 07:33:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.903 07:33:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:03.163 [2024-12-02 07:33:28.552061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:03.163 [2024-12-02 07:33:28.552140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:06:03.163 [2024-12-02 07:33:28.680503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.163 [2024-12-02 07:33:28.726901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.163  [2024-12-02T07:33:29.047Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.423 00:06:03.424 07:33:28 -- dd/posix.sh@93 -- # [[ r9ik2h8nzhmw7l5z71sleasjhetr42lnot2awjph10ucy0ofb8qr95b36pcqldwar3w9qlrgex1sqtql229r4rutmvnpbkb1d0gdg2cwvh7yjbqtbt2w5x0wtikmms9ho8vqviuya2c91cny6y1wtuqgmz8nf4twpjwwgtlebuj0cirpb8zjckp8xn1qouih7wmzlmah21e335gknj4r2l8skx5rlx4alqrod6xxv4wmnmlnuwyijwp1ac21jf0m9yke43jcpwjcu6gzujvu626opbnv9uxukjd4ir3brhlnsq6g8uy2rjbneri3zauf9506guhtz94lta4v3tywrtt0ydff28acnwgeifv9cxpfifuunz9e3fgxhz1ycueds7d5dqjzyqgcvf141i99toe48uo0sp09be1axodo4jjrdwjyl94qp2lu96knzcfr2fseuwn5bly77mzz4ibr6kop9stdtxfplqjogoo22q26v60eb9o1r81zkb9py8np == \r\9\i\k\2\h\8\n\z\h\m\w\7\l\5\z\7\1\s\l\e\a\s\j\h\e\t\r\4\2\l\n\o\t\2\a\w\j\p\h\1\0\u\c\y\0\o\f\b\8\q\r\9\5\b\3\6\p\c\q\l\d\w\a\r\3\w\9\q\l\r\g\e\x\1\s\q\t\q\l\2\2\9\r\4\r\u\t\m\v\n\p\b\k\b\1\d\0\g\d\g\2\c\w\v\h\7\y\j\b\q\t\b\t\2\w\5\x\0\w\t\i\k\m\m\s\9\h\o\8\v\q\v\i\u\y\a\2\c\9\1\c\n\y\6\y\1\w\t\u\q\g\m\z\8\n\f\4\t\w\p\j\w\w\g\t\l\e\b\u\j\0\c\i\r\p\b\8\z\j\c\k\p\8\x\n\1\q\o\u\i\h\7\w\m\z\l\m\a\h\2\1\e\3\3\5\g\k\n\j\4\r\2\l\8\s\k\x\5\r\l\x\4\a\l\q\r\o\d\6\x\x\v\4\w\m\n\m\l\n\u\w\y\i\j\w\p\1\a\c\2\1\j\f\0\m\9\y\k\e\4\3\j\c\p\w\j\c\u\6\g\z\u\j\v\u\6\2\6\o\p\b\n\v\9\u\x\u\k\j\d\4\i\r\3\b\r\h\l\n\s\q\6\g\8\u\y\2\r\j\b\n\e\r\i\3\z\a\u\f\9\5\0\6\g\u\h\t\z\9\4\l\t\a\4\v\3\t\y\w\r\t\t\0\y\d\f\f\2\8\a\c\n\w\g\e\i\f\v\9\c\x\p\f\i\f\u\u\n\z\9\e\3\f\g\x\h\z\1\y\c\u\e\d\s\7\d\5\d\q\j\z\y\q\g\c\v\f\1\4\1\i\9\9\t\o\e\4\8\u\o\0\s\p\0\9\b\e\1\a\x\o\d\o\4\j\j\r\d\w\j\y\l\9\4\q\p\2\l\u\9\6\k\n\z\c\f\r\2\f\s\e\u\w\n\5\b\l\y\7\7\m\z\z\4\i\b\r\6\k\o\p\9\s\t\d\t\x\f\p\l\q\j\o\g\o\o\2\2\q\2\6\v\6\0\e\b\9\o\1\r\8\1\z\k\b\9\p\y\8\n\p ]] 00:06:03.424 07:33:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.424 07:33:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:03.424 [2024-12-02 07:33:28.965812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:03.424 [2024-12-02 07:33:28.965889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58280 ] 00:06:03.683 [2024-12-02 07:33:29.094664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.683 [2024-12-02 07:33:29.141210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.683  [2024-12-02T07:33:29.567Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.943 00:06:03.943 07:33:29 -- dd/posix.sh@93 -- # [[ r9ik2h8nzhmw7l5z71sleasjhetr42lnot2awjph10ucy0ofb8qr95b36pcqldwar3w9qlrgex1sqtql229r4rutmvnpbkb1d0gdg2cwvh7yjbqtbt2w5x0wtikmms9ho8vqviuya2c91cny6y1wtuqgmz8nf4twpjwwgtlebuj0cirpb8zjckp8xn1qouih7wmzlmah21e335gknj4r2l8skx5rlx4alqrod6xxv4wmnmlnuwyijwp1ac21jf0m9yke43jcpwjcu6gzujvu626opbnv9uxukjd4ir3brhlnsq6g8uy2rjbneri3zauf9506guhtz94lta4v3tywrtt0ydff28acnwgeifv9cxpfifuunz9e3fgxhz1ycueds7d5dqjzyqgcvf141i99toe48uo0sp09be1axodo4jjrdwjyl94qp2lu96knzcfr2fseuwn5bly77mzz4ibr6kop9stdtxfplqjogoo22q26v60eb9o1r81zkb9py8np == \r\9\i\k\2\h\8\n\z\h\m\w\7\l\5\z\7\1\s\l\e\a\s\j\h\e\t\r\4\2\l\n\o\t\2\a\w\j\p\h\1\0\u\c\y\0\o\f\b\8\q\r\9\5\b\3\6\p\c\q\l\d\w\a\r\3\w\9\q\l\r\g\e\x\1\s\q\t\q\l\2\2\9\r\4\r\u\t\m\v\n\p\b\k\b\1\d\0\g\d\g\2\c\w\v\h\7\y\j\b\q\t\b\t\2\w\5\x\0\w\t\i\k\m\m\s\9\h\o\8\v\q\v\i\u\y\a\2\c\9\1\c\n\y\6\y\1\w\t\u\q\g\m\z\8\n\f\4\t\w\p\j\w\w\g\t\l\e\b\u\j\0\c\i\r\p\b\8\z\j\c\k\p\8\x\n\1\q\o\u\i\h\7\w\m\z\l\m\a\h\2\1\e\3\3\5\g\k\n\j\4\r\2\l\8\s\k\x\5\r\l\x\4\a\l\q\r\o\d\6\x\x\v\4\w\m\n\m\l\n\u\w\y\i\j\w\p\1\a\c\2\1\j\f\0\m\9\y\k\e\4\3\j\c\p\w\j\c\u\6\g\z\u\j\v\u\6\2\6\o\p\b\n\v\9\u\x\u\k\j\d\4\i\r\3\b\r\h\l\n\s\q\6\g\8\u\y\2\r\j\b\n\e\r\i\3\z\a\u\f\9\5\0\6\g\u\h\t\z\9\4\l\t\a\4\v\3\t\y\w\r\t\t\0\y\d\f\f\2\8\a\c\n\w\g\e\i\f\v\9\c\x\p\f\i\f\u\u\n\z\9\e\3\f\g\x\h\z\1\y\c\u\e\d\s\7\d\5\d\q\j\z\y\q\g\c\v\f\1\4\1\i\9\9\t\o\e\4\8\u\o\0\s\p\0\9\b\e\1\a\x\o\d\o\4\j\j\r\d\w\j\y\l\9\4\q\p\2\l\u\9\6\k\n\z\c\f\r\2\f\s\e\u\w\n\5\b\l\y\7\7\m\z\z\4\i\b\r\6\k\o\p\9\s\t\d\t\x\f\p\l\q\j\o\g\o\o\2\2\q\2\6\v\6\0\e\b\9\o\1\r\8\1\z\k\b\9\p\y\8\n\p ]] 00:06:03.943 07:33:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.943 07:33:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:03.943 [2024-12-02 07:33:29.391288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:03.943 [2024-12-02 07:33:29.391384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ] 00:06:03.943 [2024-12-02 07:33:29.519690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.202 [2024-12-02 07:33:29.566289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.202  [2024-12-02T07:33:29.826Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.202 00:06:04.202 07:33:29 -- dd/posix.sh@93 -- # [[ r9ik2h8nzhmw7l5z71sleasjhetr42lnot2awjph10ucy0ofb8qr95b36pcqldwar3w9qlrgex1sqtql229r4rutmvnpbkb1d0gdg2cwvh7yjbqtbt2w5x0wtikmms9ho8vqviuya2c91cny6y1wtuqgmz8nf4twpjwwgtlebuj0cirpb8zjckp8xn1qouih7wmzlmah21e335gknj4r2l8skx5rlx4alqrod6xxv4wmnmlnuwyijwp1ac21jf0m9yke43jcpwjcu6gzujvu626opbnv9uxukjd4ir3brhlnsq6g8uy2rjbneri3zauf9506guhtz94lta4v3tywrtt0ydff28acnwgeifv9cxpfifuunz9e3fgxhz1ycueds7d5dqjzyqgcvf141i99toe48uo0sp09be1axodo4jjrdwjyl94qp2lu96knzcfr2fseuwn5bly77mzz4ibr6kop9stdtxfplqjogoo22q26v60eb9o1r81zkb9py8np == \r\9\i\k\2\h\8\n\z\h\m\w\7\l\5\z\7\1\s\l\e\a\s\j\h\e\t\r\4\2\l\n\o\t\2\a\w\j\p\h\1\0\u\c\y\0\o\f\b\8\q\r\9\5\b\3\6\p\c\q\l\d\w\a\r\3\w\9\q\l\r\g\e\x\1\s\q\t\q\l\2\2\9\r\4\r\u\t\m\v\n\p\b\k\b\1\d\0\g\d\g\2\c\w\v\h\7\y\j\b\q\t\b\t\2\w\5\x\0\w\t\i\k\m\m\s\9\h\o\8\v\q\v\i\u\y\a\2\c\9\1\c\n\y\6\y\1\w\t\u\q\g\m\z\8\n\f\4\t\w\p\j\w\w\g\t\l\e\b\u\j\0\c\i\r\p\b\8\z\j\c\k\p\8\x\n\1\q\o\u\i\h\7\w\m\z\l\m\a\h\2\1\e\3\3\5\g\k\n\j\4\r\2\l\8\s\k\x\5\r\l\x\4\a\l\q\r\o\d\6\x\x\v\4\w\m\n\m\l\n\u\w\y\i\j\w\p\1\a\c\2\1\j\f\0\m\9\y\k\e\4\3\j\c\p\w\j\c\u\6\g\z\u\j\v\u\6\2\6\o\p\b\n\v\9\u\x\u\k\j\d\4\i\r\3\b\r\h\l\n\s\q\6\g\8\u\y\2\r\j\b\n\e\r\i\3\z\a\u\f\9\5\0\6\g\u\h\t\z\9\4\l\t\a\4\v\3\t\y\w\r\t\t\0\y\d\f\f\2\8\a\c\n\w\g\e\i\f\v\9\c\x\p\f\i\f\u\u\n\z\9\e\3\f\g\x\h\z\1\y\c\u\e\d\s\7\d\5\d\q\j\z\y\q\g\c\v\f\1\4\1\i\9\9\t\o\e\4\8\u\o\0\s\p\0\9\b\e\1\a\x\o\d\o\4\j\j\r\d\w\j\y\l\9\4\q\p\2\l\u\9\6\k\n\z\c\f\r\2\f\s\e\u\w\n\5\b\l\y\7\7\m\z\z\4\i\b\r\6\k\o\p\9\s\t\d\t\x\f\p\l\q\j\o\g\o\o\2\2\q\2\6\v\6\0\e\b\9\o\1\r\8\1\z\k\b\9\p\y\8\n\p ]] 00:06:04.202 00:06:04.202 real 0m3.386s 00:06:04.202 user 0m1.778s 00:06:04.202 sys 0m0.637s 00:06:04.202 07:33:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.202 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:06:04.202 ************************************ 00:06:04.202 END TEST dd_flags_misc 00:06:04.202 ************************************ 00:06:04.202 07:33:29 -- dd/posix.sh@131 -- # tests_forced_aio 00:06:04.203 07:33:29 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:04.203 * Second test run, disabling liburing, forcing AIO 00:06:04.203 07:33:29 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:04.203 07:33:29 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:04.203 07:33:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.203 07:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.203 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:06:04.203 ************************************ 00:06:04.203 START TEST dd_flag_append_forced_aio 00:06:04.203 ************************************ 00:06:04.203 07:33:29 -- common/autotest_common.sh@1114 -- # append 00:06:04.203 07:33:29 -- dd/posix.sh@16 -- # local dump0 00:06:04.203 07:33:29 -- dd/posix.sh@17 -- # local dump1 00:06:04.203 07:33:29 -- dd/posix.sh@19 -- # gen_bytes 32 
00:06:04.203 07:33:29 -- dd/common.sh@98 -- # xtrace_disable 00:06:04.203 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:06:04.203 07:33:29 -- dd/posix.sh@19 -- # dump0=opan60xdegfntrxuikxs6fljw37wr35g 00:06:04.203 07:33:29 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:04.203 07:33:29 -- dd/common.sh@98 -- # xtrace_disable 00:06:04.203 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:06:04.203 07:33:29 -- dd/posix.sh@20 -- # dump1=0e78vp9tni499h28bot4o6hhwcumg7am 00:06:04.203 07:33:29 -- dd/posix.sh@22 -- # printf %s opan60xdegfntrxuikxs6fljw37wr35g 00:06:04.203 07:33:29 -- dd/posix.sh@23 -- # printf %s 0e78vp9tni499h28bot4o6hhwcumg7am 00:06:04.203 07:33:29 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:04.462 [2024-12-02 07:33:29.873682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.462 [2024-12-02 07:33:29.873776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:06:04.462 [2024-12-02 07:33:30.011365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.462 [2024-12-02 07:33:30.072238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.722  [2024-12-02T07:33:30.346Z] Copying: 32/32 [B] (average 31 kBps) 00:06:04.722 00:06:04.722 07:33:30 -- dd/posix.sh@27 -- # [[ 0e78vp9tni499h28bot4o6hhwcumg7amopan60xdegfntrxuikxs6fljw37wr35g == \0\e\7\8\v\p\9\t\n\i\4\9\9\h\2\8\b\o\t\4\o\6\h\h\w\c\u\m\g\7\a\m\o\p\a\n\6\0\x\d\e\g\f\n\t\r\x\u\i\k\x\s\6\f\l\j\w\3\7\w\r\3\5\g ]] 00:06:04.722 00:06:04.722 real 0m0.463s 00:06:04.722 user 0m0.242s 00:06:04.722 sys 0m0.102s 00:06:04.722 07:33:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.722 ************************************ 00:06:04.722 END TEST dd_flag_append_forced_aio 00:06:04.722 07:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 ************************************ 00:06:04.722 07:33:30 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:04.722 07:33:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.722 07:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.722 07:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:04.722 ************************************ 00:06:04.722 START TEST dd_flag_directory_forced_aio 00:06:04.722 ************************************ 00:06:04.722 07:33:30 -- common/autotest_common.sh@1114 -- # directory 00:06:04.722 07:33:30 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:04.722 07:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:04.722 07:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:04.722 07:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.722 07:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.722 07:33:30 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.722 07:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.722 07:33:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.722 07:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.722 07:33:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:04.722 07:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:04.722 07:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:04.980 [2024-12-02 07:33:30.385516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.981 [2024-12-02 07:33:30.385613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58342 ] 00:06:04.981 [2024-12-02 07:33:30.522923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.981 [2024-12-02 07:33:30.573663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.240 [2024-12-02 07:33:30.617897] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:05.240 [2024-12-02 07:33:30.617948] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:05.240 [2024-12-02 07:33:30.617975] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.240 [2024-12-02 07:33:30.672354] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:05.240 07:33:30 -- common/autotest_common.sh@653 -- # es=236 00:06:05.240 07:33:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.240 07:33:30 -- common/autotest_common.sh@662 -- # es=108 00:06:05.240 07:33:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.240 07:33:30 -- common/autotest_common.sh@670 -- # es=1 00:06:05.240 07:33:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.240 07:33:30 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:05.240 07:33:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.240 07:33:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:05.240 07:33:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.240 07:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.240 07:33:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.240 07:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.240 07:33:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.240 07:33:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.240 07:33:30 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.240 07:33:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:05.240 07:33:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:05.240 [2024-12-02 07:33:30.816935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.240 [2024-12-02 07:33:30.817028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58346 ] 00:06:05.500 [2024-12-02 07:33:30.953619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.500 [2024-12-02 07:33:31.003459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.500 [2024-12-02 07:33:31.045812] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:05.500 [2024-12-02 07:33:31.045862] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:05.500 [2024-12-02 07:33:31.045889] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.500 [2024-12-02 07:33:31.102837] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:05.759 07:33:31 -- common/autotest_common.sh@653 -- # es=236 00:06:05.759 07:33:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.759 07:33:31 -- common/autotest_common.sh@662 -- # es=108 00:06:05.759 07:33:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.759 07:33:31 -- common/autotest_common.sh@670 -- # es=1 00:06:05.759 07:33:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.759 00:06:05.759 real 0m0.865s 00:06:05.759 user 0m0.474s 00:06:05.759 sys 0m0.183s 00:06:05.759 07:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.759 07:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:05.759 ************************************ 00:06:05.759 END TEST dd_flag_directory_forced_aio 00:06:05.759 ************************************ 00:06:05.759 07:33:31 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:05.759 07:33:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.759 07:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.759 07:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:05.759 ************************************ 00:06:05.759 START TEST dd_flag_nofollow_forced_aio 00:06:05.759 ************************************ 00:06:05.759 07:33:31 -- common/autotest_common.sh@1114 -- # nofollow 00:06:05.760 07:33:31 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:05.760 07:33:31 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:05.760 07:33:31 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:05.760 07:33:31 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:05.760 07:33:31 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.760 07:33:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:05.760 07:33:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.760 07:33:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.760 07:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.760 07:33:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.760 07:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.760 07:33:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.760 07:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.760 07:33:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.760 07:33:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:05.760 07:33:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.760 [2024-12-02 07:33:31.306325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.760 [2024-12-02 07:33:31.306431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58380 ] 00:06:06.018 [2024-12-02 07:33:31.442965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.018 [2024-12-02 07:33:31.495438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.018 [2024-12-02 07:33:31.540618] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:06.018 [2024-12-02 07:33:31.540670] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:06.018 [2024-12-02 07:33:31.540697] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.018 [2024-12-02 07:33:31.596347] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:06.290 07:33:31 -- common/autotest_common.sh@653 -- # es=216 00:06:06.290 07:33:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.290 07:33:31 -- common/autotest_common.sh@662 -- # es=88 00:06:06.291 07:33:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:06.291 07:33:31 -- common/autotest_common.sh@670 -- # es=1 00:06:06.291 07:33:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.291 07:33:31 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:06.291 07:33:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.291 07:33:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:06.291 07:33:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.291 07:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.291 07:33:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.291 07:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.291 07:33:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.291 07:33:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.291 07:33:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.291 07:33:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:06.291 07:33:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:06.291 [2024-12-02 07:33:31.743072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.291 [2024-12-02 07:33:31.743165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58384 ] 00:06:06.291 [2024-12-02 07:33:31.876448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.550 [2024-12-02 07:33:31.927567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.550 [2024-12-02 07:33:31.970729] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:06.550 [2024-12-02 07:33:31.970776] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:06.550 [2024-12-02 07:33:31.970804] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.550 [2024-12-02 07:33:32.024982] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:06.550 07:33:32 -- common/autotest_common.sh@653 -- # es=216 00:06:06.550 07:33:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.550 07:33:32 -- common/autotest_common.sh@662 -- # es=88 00:06:06.550 07:33:32 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:06.551 07:33:32 -- common/autotest_common.sh@670 -- # es=1 00:06:06.551 07:33:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.551 07:33:32 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:06.551 07:33:32 -- dd/common.sh@98 -- # xtrace_disable 00:06:06.551 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.551 07:33:32 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.551 [2024-12-02 07:33:32.170603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:06.551 [2024-12-02 07:33:32.170695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58392 ] 00:06:06.810 [2024-12-02 07:33:32.306918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.810 [2024-12-02 07:33:32.356577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.810  [2024-12-02T07:33:32.692Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.068 00:06:07.068 07:33:32 -- dd/posix.sh@49 -- # [[ 6nw1sicnnwz0h325kfqj5q0owc9l5nj1mqcj8kaltpeuar3tnxsc4etqs73d7wfxdqvrpxtvgy9r1obky1709gvoblkj3jgexax7ix5q4ojpmyuofwmu0uuby087eyi7lsahqr0wds8xo1c54qgsopue1nbhdfn4d6v8s670xl3hyv5787aq60sx07t1xmegdx7x0yeqo973unrfl9607oxof9dyrdwbmuqfjabi8njroefql8i8wa0y6zas1ppx3nirirn6c5eah2vtcq8yg72ilz8rvjpkbtvmck28s7tecnc4tsuq3ghrp044542c17jzsqmlfn7836q1amhm7cw6msk9txc0os94rmx8z8o2jxgazkvl00fh9dpx90wnzll9ypalw0quxgnm8q8ga5ay8231309tdv3dkuaxr9vhn2vv981byqz97ullnqhnfbv5xp28a8dyewj1t2qd5xxrefzh7eotomjdoa1zbgrwo3ubj6kinq0f6i2o8v9p == \6\n\w\1\s\i\c\n\n\w\z\0\h\3\2\5\k\f\q\j\5\q\0\o\w\c\9\l\5\n\j\1\m\q\c\j\8\k\a\l\t\p\e\u\a\r\3\t\n\x\s\c\4\e\t\q\s\7\3\d\7\w\f\x\d\q\v\r\p\x\t\v\g\y\9\r\1\o\b\k\y\1\7\0\9\g\v\o\b\l\k\j\3\j\g\e\x\a\x\7\i\x\5\q\4\o\j\p\m\y\u\o\f\w\m\u\0\u\u\b\y\0\8\7\e\y\i\7\l\s\a\h\q\r\0\w\d\s\8\x\o\1\c\5\4\q\g\s\o\p\u\e\1\n\b\h\d\f\n\4\d\6\v\8\s\6\7\0\x\l\3\h\y\v\5\7\8\7\a\q\6\0\s\x\0\7\t\1\x\m\e\g\d\x\7\x\0\y\e\q\o\9\7\3\u\n\r\f\l\9\6\0\7\o\x\o\f\9\d\y\r\d\w\b\m\u\q\f\j\a\b\i\8\n\j\r\o\e\f\q\l\8\i\8\w\a\0\y\6\z\a\s\1\p\p\x\3\n\i\r\i\r\n\6\c\5\e\a\h\2\v\t\c\q\8\y\g\7\2\i\l\z\8\r\v\j\p\k\b\t\v\m\c\k\2\8\s\7\t\e\c\n\c\4\t\s\u\q\3\g\h\r\p\0\4\4\5\4\2\c\1\7\j\z\s\q\m\l\f\n\7\8\3\6\q\1\a\m\h\m\7\c\w\6\m\s\k\9\t\x\c\0\o\s\9\4\r\m\x\8\z\8\o\2\j\x\g\a\z\k\v\l\0\0\f\h\9\d\p\x\9\0\w\n\z\l\l\9\y\p\a\l\w\0\q\u\x\g\n\m\8\q\8\g\a\5\a\y\8\2\3\1\3\0\9\t\d\v\3\d\k\u\a\x\r\9\v\h\n\2\v\v\9\8\1\b\y\q\z\9\7\u\l\l\n\q\h\n\f\b\v\5\x\p\2\8\a\8\d\y\e\w\j\1\t\2\q\d\5\x\x\r\e\f\z\h\7\e\o\t\o\m\j\d\o\a\1\z\b\g\r\w\o\3\u\b\j\6\k\i\n\q\0\f\6\i\2\o\8\v\9\p ]] 00:06:07.068 00:06:07.068 real 0m1.313s 00:06:07.068 user 0m0.696s 00:06:07.068 sys 0m0.289s 00:06:07.068 07:33:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.068 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.068 ************************************ 00:06:07.068 END TEST dd_flag_nofollow_forced_aio 00:06:07.068 ************************************ 00:06:07.068 07:33:32 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:07.068 07:33:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.068 07:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.068 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.068 ************************************ 00:06:07.068 START TEST dd_flag_noatime_forced_aio 00:06:07.068 ************************************ 00:06:07.068 07:33:32 -- common/autotest_common.sh@1114 -- # noatime 00:06:07.068 07:33:32 -- dd/posix.sh@53 -- # local atime_if 00:06:07.068 07:33:32 -- dd/posix.sh@54 -- # local atime_of 00:06:07.068 07:33:32 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:07.068 07:33:32 -- dd/common.sh@98 -- # xtrace_disable 00:06:07.068 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.068 07:33:32 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.068 07:33:32 -- dd/posix.sh@60 -- 
# atime_if=1733124812 00:06:07.068 07:33:32 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.068 07:33:32 -- dd/posix.sh@61 -- # atime_of=1733124812 00:06:07.068 07:33:32 -- dd/posix.sh@66 -- # sleep 1 00:06:08.445 07:33:33 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.445 [2024-12-02 07:33:33.693087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.445 [2024-12-02 07:33:33.693194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ] 00:06:08.445 [2024-12-02 07:33:33.831065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.445 [2024-12-02 07:33:33.893532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.445  [2024-12-02T07:33:34.331Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.707 00:06:08.707 07:33:34 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.707 07:33:34 -- dd/posix.sh@69 -- # (( atime_if == 1733124812 )) 00:06:08.707 07:33:34 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.707 07:33:34 -- dd/posix.sh@70 -- # (( atime_of == 1733124812 )) 00:06:08.707 07:33:34 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.707 [2024-12-02 07:33:34.146527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:08.707 [2024-12-02 07:33:34.146614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58444 ] 00:06:08.707 [2024-12-02 07:33:34.278539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.965 [2024-12-02 07:33:34.329291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.965  [2024-12-02T07:33:34.589Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.965 00:06:08.965 07:33:34 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.965 07:33:34 -- dd/posix.sh@73 -- # (( atime_if < 1733124814 )) 00:06:08.965 00:06:08.965 real 0m1.922s 00:06:08.965 user 0m0.488s 00:06:08.965 sys 0m0.195s 00:06:08.965 07:33:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.965 ************************************ 00:06:08.965 END TEST dd_flag_noatime_forced_aio 00:06:08.965 07:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:08.965 ************************************ 00:06:08.965 07:33:34 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:08.965 07:33:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.965 07:33:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.965 07:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:08.965 ************************************ 00:06:08.965 START TEST dd_flags_misc_forced_aio 00:06:08.965 ************************************ 00:06:09.224 07:33:34 -- common/autotest_common.sh@1114 -- # io 00:06:09.224 07:33:34 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:09.224 07:33:34 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:09.224 07:33:34 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:09.224 07:33:34 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:09.224 07:33:34 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:09.224 07:33:34 -- dd/common.sh@98 -- # xtrace_disable 00:06:09.224 07:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:09.224 07:33:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.224 07:33:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:09.224 [2024-12-02 07:33:34.633019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:09.224 [2024-12-02 07:33:34.633086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58470 ] 00:06:09.224 [2024-12-02 07:33:34.763250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.224 [2024-12-02 07:33:34.811359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.484  [2024-12-02T07:33:35.108Z] Copying: 512/512 [B] (average 500 kBps) 00:06:09.484 00:06:09.484 07:33:35 -- dd/posix.sh@93 -- # [[ g3m9wo2rxbf8t3zfh4vf1ybfiidmqm3o8xrt6w8hhijy10sbfakstg59tl2xx35kar9q8n1rr7pdllfbskwckiglu64uhbjgveuikl6kyus9bn7m7wbvvhnyq7yf10935f6qqo6xkpicezyakveb5ua4r59k2v6om799jle4vzz0wv016z32r3t2wllo71dza3g727xj756n3sxix3bmzx5dux1y2oruzy9a8kg540kd3vx7n8c8o94njvz0qg3ucaxycddjd9bo4ueb3s5ri2o1wj4iutstag951ifxioz64pr6vg5gksiz052znpktk332l4oq6oagnofvamfc0miv2m3k5ze55jbgpxay2pbrp8h89mhnyuhr55btiu07lhnf3wcgdbolqc0nxcq2xxd8fvz5g98fqh758n6nai5cx2ke6o0gvh0znu32501rn5ffyazyn1i7ry14j3vw32o4grbdzjog26yhj93apwo0gohbdqo2d3aqcldpcp9j == \g\3\m\9\w\o\2\r\x\b\f\8\t\3\z\f\h\4\v\f\1\y\b\f\i\i\d\m\q\m\3\o\8\x\r\t\6\w\8\h\h\i\j\y\1\0\s\b\f\a\k\s\t\g\5\9\t\l\2\x\x\3\5\k\a\r\9\q\8\n\1\r\r\7\p\d\l\l\f\b\s\k\w\c\k\i\g\l\u\6\4\u\h\b\j\g\v\e\u\i\k\l\6\k\y\u\s\9\b\n\7\m\7\w\b\v\v\h\n\y\q\7\y\f\1\0\9\3\5\f\6\q\q\o\6\x\k\p\i\c\e\z\y\a\k\v\e\b\5\u\a\4\r\5\9\k\2\v\6\o\m\7\9\9\j\l\e\4\v\z\z\0\w\v\0\1\6\z\3\2\r\3\t\2\w\l\l\o\7\1\d\z\a\3\g\7\2\7\x\j\7\5\6\n\3\s\x\i\x\3\b\m\z\x\5\d\u\x\1\y\2\o\r\u\z\y\9\a\8\k\g\5\4\0\k\d\3\v\x\7\n\8\c\8\o\9\4\n\j\v\z\0\q\g\3\u\c\a\x\y\c\d\d\j\d\9\b\o\4\u\e\b\3\s\5\r\i\2\o\1\w\j\4\i\u\t\s\t\a\g\9\5\1\i\f\x\i\o\z\6\4\p\r\6\v\g\5\g\k\s\i\z\0\5\2\z\n\p\k\t\k\3\3\2\l\4\o\q\6\o\a\g\n\o\f\v\a\m\f\c\0\m\i\v\2\m\3\k\5\z\e\5\5\j\b\g\p\x\a\y\2\p\b\r\p\8\h\8\9\m\h\n\y\u\h\r\5\5\b\t\i\u\0\7\l\h\n\f\3\w\c\g\d\b\o\l\q\c\0\n\x\c\q\2\x\x\d\8\f\v\z\5\g\9\8\f\q\h\7\5\8\n\6\n\a\i\5\c\x\2\k\e\6\o\0\g\v\h\0\z\n\u\3\2\5\0\1\r\n\5\f\f\y\a\z\y\n\1\i\7\r\y\1\4\j\3\v\w\3\2\o\4\g\r\b\d\z\j\o\g\2\6\y\h\j\9\3\a\p\w\o\0\g\o\h\b\d\q\o\2\d\3\a\q\c\l\d\p\c\p\9\j ]] 00:06:09.484 07:33:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.484 07:33:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:09.484 [2024-12-02 07:33:35.069045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:09.484 [2024-12-02 07:33:35.069141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58478 ] 00:06:09.743 [2024-12-02 07:33:35.203976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.743 [2024-12-02 07:33:35.254976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.743  [2024-12-02T07:33:35.626Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.002 00:06:10.002 07:33:35 -- dd/posix.sh@93 -- # [[ g3m9wo2rxbf8t3zfh4vf1ybfiidmqm3o8xrt6w8hhijy10sbfakstg59tl2xx35kar9q8n1rr7pdllfbskwckiglu64uhbjgveuikl6kyus9bn7m7wbvvhnyq7yf10935f6qqo6xkpicezyakveb5ua4r59k2v6om799jle4vzz0wv016z32r3t2wllo71dza3g727xj756n3sxix3bmzx5dux1y2oruzy9a8kg540kd3vx7n8c8o94njvz0qg3ucaxycddjd9bo4ueb3s5ri2o1wj4iutstag951ifxioz64pr6vg5gksiz052znpktk332l4oq6oagnofvamfc0miv2m3k5ze55jbgpxay2pbrp8h89mhnyuhr55btiu07lhnf3wcgdbolqc0nxcq2xxd8fvz5g98fqh758n6nai5cx2ke6o0gvh0znu32501rn5ffyazyn1i7ry14j3vw32o4grbdzjog26yhj93apwo0gohbdqo2d3aqcldpcp9j == \g\3\m\9\w\o\2\r\x\b\f\8\t\3\z\f\h\4\v\f\1\y\b\f\i\i\d\m\q\m\3\o\8\x\r\t\6\w\8\h\h\i\j\y\1\0\s\b\f\a\k\s\t\g\5\9\t\l\2\x\x\3\5\k\a\r\9\q\8\n\1\r\r\7\p\d\l\l\f\b\s\k\w\c\k\i\g\l\u\6\4\u\h\b\j\g\v\e\u\i\k\l\6\k\y\u\s\9\b\n\7\m\7\w\b\v\v\h\n\y\q\7\y\f\1\0\9\3\5\f\6\q\q\o\6\x\k\p\i\c\e\z\y\a\k\v\e\b\5\u\a\4\r\5\9\k\2\v\6\o\m\7\9\9\j\l\e\4\v\z\z\0\w\v\0\1\6\z\3\2\r\3\t\2\w\l\l\o\7\1\d\z\a\3\g\7\2\7\x\j\7\5\6\n\3\s\x\i\x\3\b\m\z\x\5\d\u\x\1\y\2\o\r\u\z\y\9\a\8\k\g\5\4\0\k\d\3\v\x\7\n\8\c\8\o\9\4\n\j\v\z\0\q\g\3\u\c\a\x\y\c\d\d\j\d\9\b\o\4\u\e\b\3\s\5\r\i\2\o\1\w\j\4\i\u\t\s\t\a\g\9\5\1\i\f\x\i\o\z\6\4\p\r\6\v\g\5\g\k\s\i\z\0\5\2\z\n\p\k\t\k\3\3\2\l\4\o\q\6\o\a\g\n\o\f\v\a\m\f\c\0\m\i\v\2\m\3\k\5\z\e\5\5\j\b\g\p\x\a\y\2\p\b\r\p\8\h\8\9\m\h\n\y\u\h\r\5\5\b\t\i\u\0\7\l\h\n\f\3\w\c\g\d\b\o\l\q\c\0\n\x\c\q\2\x\x\d\8\f\v\z\5\g\9\8\f\q\h\7\5\8\n\6\n\a\i\5\c\x\2\k\e\6\o\0\g\v\h\0\z\n\u\3\2\5\0\1\r\n\5\f\f\y\a\z\y\n\1\i\7\r\y\1\4\j\3\v\w\3\2\o\4\g\r\b\d\z\j\o\g\2\6\y\h\j\9\3\a\p\w\o\0\g\o\h\b\d\q\o\2\d\3\a\q\c\l\d\p\c\p\9\j ]] 00:06:10.002 07:33:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.002 07:33:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:10.002 [2024-12-02 07:33:35.502821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.002 [2024-12-02 07:33:35.503004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 00:06:10.261 [2024-12-02 07:33:35.629249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.262 [2024-12-02 07:33:35.676104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.262  [2024-12-02T07:33:35.886Z] Copying: 512/512 [B] (average 166 kBps) 00:06:10.262 00:06:10.262 07:33:35 -- dd/posix.sh@93 -- # [[ g3m9wo2rxbf8t3zfh4vf1ybfiidmqm3o8xrt6w8hhijy10sbfakstg59tl2xx35kar9q8n1rr7pdllfbskwckiglu64uhbjgveuikl6kyus9bn7m7wbvvhnyq7yf10935f6qqo6xkpicezyakveb5ua4r59k2v6om799jle4vzz0wv016z32r3t2wllo71dza3g727xj756n3sxix3bmzx5dux1y2oruzy9a8kg540kd3vx7n8c8o94njvz0qg3ucaxycddjd9bo4ueb3s5ri2o1wj4iutstag951ifxioz64pr6vg5gksiz052znpktk332l4oq6oagnofvamfc0miv2m3k5ze55jbgpxay2pbrp8h89mhnyuhr55btiu07lhnf3wcgdbolqc0nxcq2xxd8fvz5g98fqh758n6nai5cx2ke6o0gvh0znu32501rn5ffyazyn1i7ry14j3vw32o4grbdzjog26yhj93apwo0gohbdqo2d3aqcldpcp9j == \g\3\m\9\w\o\2\r\x\b\f\8\t\3\z\f\h\4\v\f\1\y\b\f\i\i\d\m\q\m\3\o\8\x\r\t\6\w\8\h\h\i\j\y\1\0\s\b\f\a\k\s\t\g\5\9\t\l\2\x\x\3\5\k\a\r\9\q\8\n\1\r\r\7\p\d\l\l\f\b\s\k\w\c\k\i\g\l\u\6\4\u\h\b\j\g\v\e\u\i\k\l\6\k\y\u\s\9\b\n\7\m\7\w\b\v\v\h\n\y\q\7\y\f\1\0\9\3\5\f\6\q\q\o\6\x\k\p\i\c\e\z\y\a\k\v\e\b\5\u\a\4\r\5\9\k\2\v\6\o\m\7\9\9\j\l\e\4\v\z\z\0\w\v\0\1\6\z\3\2\r\3\t\2\w\l\l\o\7\1\d\z\a\3\g\7\2\7\x\j\7\5\6\n\3\s\x\i\x\3\b\m\z\x\5\d\u\x\1\y\2\o\r\u\z\y\9\a\8\k\g\5\4\0\k\d\3\v\x\7\n\8\c\8\o\9\4\n\j\v\z\0\q\g\3\u\c\a\x\y\c\d\d\j\d\9\b\o\4\u\e\b\3\s\5\r\i\2\o\1\w\j\4\i\u\t\s\t\a\g\9\5\1\i\f\x\i\o\z\6\4\p\r\6\v\g\5\g\k\s\i\z\0\5\2\z\n\p\k\t\k\3\3\2\l\4\o\q\6\o\a\g\n\o\f\v\a\m\f\c\0\m\i\v\2\m\3\k\5\z\e\5\5\j\b\g\p\x\a\y\2\p\b\r\p\8\h\8\9\m\h\n\y\u\h\r\5\5\b\t\i\u\0\7\l\h\n\f\3\w\c\g\d\b\o\l\q\c\0\n\x\c\q\2\x\x\d\8\f\v\z\5\g\9\8\f\q\h\7\5\8\n\6\n\a\i\5\c\x\2\k\e\6\o\0\g\v\h\0\z\n\u\3\2\5\0\1\r\n\5\f\f\y\a\z\y\n\1\i\7\r\y\1\4\j\3\v\w\3\2\o\4\g\r\b\d\z\j\o\g\2\6\y\h\j\9\3\a\p\w\o\0\g\o\h\b\d\q\o\2\d\3\a\q\c\l\d\p\c\p\9\j ]] 00:06:10.262 07:33:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.262 07:33:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:10.520 [2024-12-02 07:33:35.928710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.520 [2024-12-02 07:33:35.929167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:06:10.520 [2024-12-02 07:33:36.066647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.520 [2024-12-02 07:33:36.113380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.780  [2024-12-02T07:33:36.404Z] Copying: 512/512 [B] (average 250 kBps) 00:06:10.780 00:06:10.780 07:33:36 -- dd/posix.sh@93 -- # [[ g3m9wo2rxbf8t3zfh4vf1ybfiidmqm3o8xrt6w8hhijy10sbfakstg59tl2xx35kar9q8n1rr7pdllfbskwckiglu64uhbjgveuikl6kyus9bn7m7wbvvhnyq7yf10935f6qqo6xkpicezyakveb5ua4r59k2v6om799jle4vzz0wv016z32r3t2wllo71dza3g727xj756n3sxix3bmzx5dux1y2oruzy9a8kg540kd3vx7n8c8o94njvz0qg3ucaxycddjd9bo4ueb3s5ri2o1wj4iutstag951ifxioz64pr6vg5gksiz052znpktk332l4oq6oagnofvamfc0miv2m3k5ze55jbgpxay2pbrp8h89mhnyuhr55btiu07lhnf3wcgdbolqc0nxcq2xxd8fvz5g98fqh758n6nai5cx2ke6o0gvh0znu32501rn5ffyazyn1i7ry14j3vw32o4grbdzjog26yhj93apwo0gohbdqo2d3aqcldpcp9j == \g\3\m\9\w\o\2\r\x\b\f\8\t\3\z\f\h\4\v\f\1\y\b\f\i\i\d\m\q\m\3\o\8\x\r\t\6\w\8\h\h\i\j\y\1\0\s\b\f\a\k\s\t\g\5\9\t\l\2\x\x\3\5\k\a\r\9\q\8\n\1\r\r\7\p\d\l\l\f\b\s\k\w\c\k\i\g\l\u\6\4\u\h\b\j\g\v\e\u\i\k\l\6\k\y\u\s\9\b\n\7\m\7\w\b\v\v\h\n\y\q\7\y\f\1\0\9\3\5\f\6\q\q\o\6\x\k\p\i\c\e\z\y\a\k\v\e\b\5\u\a\4\r\5\9\k\2\v\6\o\m\7\9\9\j\l\e\4\v\z\z\0\w\v\0\1\6\z\3\2\r\3\t\2\w\l\l\o\7\1\d\z\a\3\g\7\2\7\x\j\7\5\6\n\3\s\x\i\x\3\b\m\z\x\5\d\u\x\1\y\2\o\r\u\z\y\9\a\8\k\g\5\4\0\k\d\3\v\x\7\n\8\c\8\o\9\4\n\j\v\z\0\q\g\3\u\c\a\x\y\c\d\d\j\d\9\b\o\4\u\e\b\3\s\5\r\i\2\o\1\w\j\4\i\u\t\s\t\a\g\9\5\1\i\f\x\i\o\z\6\4\p\r\6\v\g\5\g\k\s\i\z\0\5\2\z\n\p\k\t\k\3\3\2\l\4\o\q\6\o\a\g\n\o\f\v\a\m\f\c\0\m\i\v\2\m\3\k\5\z\e\5\5\j\b\g\p\x\a\y\2\p\b\r\p\8\h\8\9\m\h\n\y\u\h\r\5\5\b\t\i\u\0\7\l\h\n\f\3\w\c\g\d\b\o\l\q\c\0\n\x\c\q\2\x\x\d\8\f\v\z\5\g\9\8\f\q\h\7\5\8\n\6\n\a\i\5\c\x\2\k\e\6\o\0\g\v\h\0\z\n\u\3\2\5\0\1\r\n\5\f\f\y\a\z\y\n\1\i\7\r\y\1\4\j\3\v\w\3\2\o\4\g\r\b\d\z\j\o\g\2\6\y\h\j\9\3\a\p\w\o\0\g\o\h\b\d\q\o\2\d\3\a\q\c\l\d\p\c\p\9\j ]] 00:06:10.780 07:33:36 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:10.780 07:33:36 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:10.780 07:33:36 -- dd/common.sh@98 -- # xtrace_disable 00:06:10.780 07:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:10.780 07:33:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.780 07:33:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:10.780 [2024-12-02 07:33:36.380229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.780 [2024-12-02 07:33:36.380343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58495 ] 00:06:11.039 [2024-12-02 07:33:36.518622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.039 [2024-12-02 07:33:36.568200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.039  [2024-12-02T07:33:36.922Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.298 00:06:11.298 07:33:36 -- dd/posix.sh@93 -- # [[ rsgyzndui0gli0vkrqv1sma02amm8c6i0mdk9gwapj5nwqlfxz79yqmj9uscpybwal57fa69030hu7l0xuaffq1n74eufx259ey1daad9e3rt3yzwnt93mwuxulo7tj4x4ez6mw9vimnytfh8zr10ejetcmrsm2zsyrvizum8p7ua05du8d8y1rpp7thv4twh02snpbnhtb1cnh9tn1vr28otvbf7qaw5dpad94n0yct19n02z2ua5oz5jsa6cyuzgrgsfdlhh1whffaodrjmf290xe3sfgoxyqgoimvans9gydtfpihyv9ydwume7mw8g952p2trcpw5h99ynafacvnzvvlg6xxfmazmlua8ys1t4osif0macsyzpwhmctcrwkhc55lwjoo910mspujs1thj0vshrihzrq5swl7orzgqib7jspyxnozzi2y1d2ndj0z6ow6xefqqap71zabt7sgbifgvzorg7ro3mpcu3k8relygpxgb61jub2eze36 == \r\s\g\y\z\n\d\u\i\0\g\l\i\0\v\k\r\q\v\1\s\m\a\0\2\a\m\m\8\c\6\i\0\m\d\k\9\g\w\a\p\j\5\n\w\q\l\f\x\z\7\9\y\q\m\j\9\u\s\c\p\y\b\w\a\l\5\7\f\a\6\9\0\3\0\h\u\7\l\0\x\u\a\f\f\q\1\n\7\4\e\u\f\x\2\5\9\e\y\1\d\a\a\d\9\e\3\r\t\3\y\z\w\n\t\9\3\m\w\u\x\u\l\o\7\t\j\4\x\4\e\z\6\m\w\9\v\i\m\n\y\t\f\h\8\z\r\1\0\e\j\e\t\c\m\r\s\m\2\z\s\y\r\v\i\z\u\m\8\p\7\u\a\0\5\d\u\8\d\8\y\1\r\p\p\7\t\h\v\4\t\w\h\0\2\s\n\p\b\n\h\t\b\1\c\n\h\9\t\n\1\v\r\2\8\o\t\v\b\f\7\q\a\w\5\d\p\a\d\9\4\n\0\y\c\t\1\9\n\0\2\z\2\u\a\5\o\z\5\j\s\a\6\c\y\u\z\g\r\g\s\f\d\l\h\h\1\w\h\f\f\a\o\d\r\j\m\f\2\9\0\x\e\3\s\f\g\o\x\y\q\g\o\i\m\v\a\n\s\9\g\y\d\t\f\p\i\h\y\v\9\y\d\w\u\m\e\7\m\w\8\g\9\5\2\p\2\t\r\c\p\w\5\h\9\9\y\n\a\f\a\c\v\n\z\v\v\l\g\6\x\x\f\m\a\z\m\l\u\a\8\y\s\1\t\4\o\s\i\f\0\m\a\c\s\y\z\p\w\h\m\c\t\c\r\w\k\h\c\5\5\l\w\j\o\o\9\1\0\m\s\p\u\j\s\1\t\h\j\0\v\s\h\r\i\h\z\r\q\5\s\w\l\7\o\r\z\g\q\i\b\7\j\s\p\y\x\n\o\z\z\i\2\y\1\d\2\n\d\j\0\z\6\o\w\6\x\e\f\q\q\a\p\7\1\z\a\b\t\7\s\g\b\i\f\g\v\z\o\r\g\7\r\o\3\m\p\c\u\3\k\8\r\e\l\y\g\p\x\g\b\6\1\j\u\b\2\e\z\e\3\6 ]] 00:06:11.298 07:33:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.298 07:33:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:11.298 [2024-12-02 07:33:36.825634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:11.298 [2024-12-02 07:33:36.825728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58502 ] 00:06:11.556 [2024-12-02 07:33:36.957957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.556 [2024-12-02 07:33:37.007227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.556  [2024-12-02T07:33:37.440Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.816 00:06:11.816 07:33:37 -- dd/posix.sh@93 -- # [[ rsgyzndui0gli0vkrqv1sma02amm8c6i0mdk9gwapj5nwqlfxz79yqmj9uscpybwal57fa69030hu7l0xuaffq1n74eufx259ey1daad9e3rt3yzwnt93mwuxulo7tj4x4ez6mw9vimnytfh8zr10ejetcmrsm2zsyrvizum8p7ua05du8d8y1rpp7thv4twh02snpbnhtb1cnh9tn1vr28otvbf7qaw5dpad94n0yct19n02z2ua5oz5jsa6cyuzgrgsfdlhh1whffaodrjmf290xe3sfgoxyqgoimvans9gydtfpihyv9ydwume7mw8g952p2trcpw5h99ynafacvnzvvlg6xxfmazmlua8ys1t4osif0macsyzpwhmctcrwkhc55lwjoo910mspujs1thj0vshrihzrq5swl7orzgqib7jspyxnozzi2y1d2ndj0z6ow6xefqqap71zabt7sgbifgvzorg7ro3mpcu3k8relygpxgb61jub2eze36 == \r\s\g\y\z\n\d\u\i\0\g\l\i\0\v\k\r\q\v\1\s\m\a\0\2\a\m\m\8\c\6\i\0\m\d\k\9\g\w\a\p\j\5\n\w\q\l\f\x\z\7\9\y\q\m\j\9\u\s\c\p\y\b\w\a\l\5\7\f\a\6\9\0\3\0\h\u\7\l\0\x\u\a\f\f\q\1\n\7\4\e\u\f\x\2\5\9\e\y\1\d\a\a\d\9\e\3\r\t\3\y\z\w\n\t\9\3\m\w\u\x\u\l\o\7\t\j\4\x\4\e\z\6\m\w\9\v\i\m\n\y\t\f\h\8\z\r\1\0\e\j\e\t\c\m\r\s\m\2\z\s\y\r\v\i\z\u\m\8\p\7\u\a\0\5\d\u\8\d\8\y\1\r\p\p\7\t\h\v\4\t\w\h\0\2\s\n\p\b\n\h\t\b\1\c\n\h\9\t\n\1\v\r\2\8\o\t\v\b\f\7\q\a\w\5\d\p\a\d\9\4\n\0\y\c\t\1\9\n\0\2\z\2\u\a\5\o\z\5\j\s\a\6\c\y\u\z\g\r\g\s\f\d\l\h\h\1\w\h\f\f\a\o\d\r\j\m\f\2\9\0\x\e\3\s\f\g\o\x\y\q\g\o\i\m\v\a\n\s\9\g\y\d\t\f\p\i\h\y\v\9\y\d\w\u\m\e\7\m\w\8\g\9\5\2\p\2\t\r\c\p\w\5\h\9\9\y\n\a\f\a\c\v\n\z\v\v\l\g\6\x\x\f\m\a\z\m\l\u\a\8\y\s\1\t\4\o\s\i\f\0\m\a\c\s\y\z\p\w\h\m\c\t\c\r\w\k\h\c\5\5\l\w\j\o\o\9\1\0\m\s\p\u\j\s\1\t\h\j\0\v\s\h\r\i\h\z\r\q\5\s\w\l\7\o\r\z\g\q\i\b\7\j\s\p\y\x\n\o\z\z\i\2\y\1\d\2\n\d\j\0\z\6\o\w\6\x\e\f\q\q\a\p\7\1\z\a\b\t\7\s\g\b\i\f\g\v\z\o\r\g\7\r\o\3\m\p\c\u\3\k\8\r\e\l\y\g\p\x\g\b\6\1\j\u\b\2\e\z\e\3\6 ]] 00:06:11.816 07:33:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.816 07:33:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:11.816 [2024-12-02 07:33:37.255086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:11.816 [2024-12-02 07:33:37.255182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58510 ] 00:06:11.816 [2024-12-02 07:33:37.390109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.075 [2024-12-02 07:33:37.445086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.075  [2024-12-02T07:33:37.699Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.075 00:06:12.075 07:33:37 -- dd/posix.sh@93 -- # [[ rsgyzndui0gli0vkrqv1sma02amm8c6i0mdk9gwapj5nwqlfxz79yqmj9uscpybwal57fa69030hu7l0xuaffq1n74eufx259ey1daad9e3rt3yzwnt93mwuxulo7tj4x4ez6mw9vimnytfh8zr10ejetcmrsm2zsyrvizum8p7ua05du8d8y1rpp7thv4twh02snpbnhtb1cnh9tn1vr28otvbf7qaw5dpad94n0yct19n02z2ua5oz5jsa6cyuzgrgsfdlhh1whffaodrjmf290xe3sfgoxyqgoimvans9gydtfpihyv9ydwume7mw8g952p2trcpw5h99ynafacvnzvvlg6xxfmazmlua8ys1t4osif0macsyzpwhmctcrwkhc55lwjoo910mspujs1thj0vshrihzrq5swl7orzgqib7jspyxnozzi2y1d2ndj0z6ow6xefqqap71zabt7sgbifgvzorg7ro3mpcu3k8relygpxgb61jub2eze36 == \r\s\g\y\z\n\d\u\i\0\g\l\i\0\v\k\r\q\v\1\s\m\a\0\2\a\m\m\8\c\6\i\0\m\d\k\9\g\w\a\p\j\5\n\w\q\l\f\x\z\7\9\y\q\m\j\9\u\s\c\p\y\b\w\a\l\5\7\f\a\6\9\0\3\0\h\u\7\l\0\x\u\a\f\f\q\1\n\7\4\e\u\f\x\2\5\9\e\y\1\d\a\a\d\9\e\3\r\t\3\y\z\w\n\t\9\3\m\w\u\x\u\l\o\7\t\j\4\x\4\e\z\6\m\w\9\v\i\m\n\y\t\f\h\8\z\r\1\0\e\j\e\t\c\m\r\s\m\2\z\s\y\r\v\i\z\u\m\8\p\7\u\a\0\5\d\u\8\d\8\y\1\r\p\p\7\t\h\v\4\t\w\h\0\2\s\n\p\b\n\h\t\b\1\c\n\h\9\t\n\1\v\r\2\8\o\t\v\b\f\7\q\a\w\5\d\p\a\d\9\4\n\0\y\c\t\1\9\n\0\2\z\2\u\a\5\o\z\5\j\s\a\6\c\y\u\z\g\r\g\s\f\d\l\h\h\1\w\h\f\f\a\o\d\r\j\m\f\2\9\0\x\e\3\s\f\g\o\x\y\q\g\o\i\m\v\a\n\s\9\g\y\d\t\f\p\i\h\y\v\9\y\d\w\u\m\e\7\m\w\8\g\9\5\2\p\2\t\r\c\p\w\5\h\9\9\y\n\a\f\a\c\v\n\z\v\v\l\g\6\x\x\f\m\a\z\m\l\u\a\8\y\s\1\t\4\o\s\i\f\0\m\a\c\s\y\z\p\w\h\m\c\t\c\r\w\k\h\c\5\5\l\w\j\o\o\9\1\0\m\s\p\u\j\s\1\t\h\j\0\v\s\h\r\i\h\z\r\q\5\s\w\l\7\o\r\z\g\q\i\b\7\j\s\p\y\x\n\o\z\z\i\2\y\1\d\2\n\d\j\0\z\6\o\w\6\x\e\f\q\q\a\p\7\1\z\a\b\t\7\s\g\b\i\f\g\v\z\o\r\g\7\r\o\3\m\p\c\u\3\k\8\r\e\l\y\g\p\x\g\b\6\1\j\u\b\2\e\z\e\3\6 ]] 00:06:12.075 07:33:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.075 07:33:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:12.335 [2024-12-02 07:33:37.706418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:12.335 [2024-12-02 07:33:37.706671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58517 ] 00:06:12.335 [2024-12-02 07:33:37.842485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.335 [2024-12-02 07:33:37.890669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.335  [2024-12-02T07:33:38.218Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.594 00:06:12.595 07:33:38 -- dd/posix.sh@93 -- # [[ rsgyzndui0gli0vkrqv1sma02amm8c6i0mdk9gwapj5nwqlfxz79yqmj9uscpybwal57fa69030hu7l0xuaffq1n74eufx259ey1daad9e3rt3yzwnt93mwuxulo7tj4x4ez6mw9vimnytfh8zr10ejetcmrsm2zsyrvizum8p7ua05du8d8y1rpp7thv4twh02snpbnhtb1cnh9tn1vr28otvbf7qaw5dpad94n0yct19n02z2ua5oz5jsa6cyuzgrgsfdlhh1whffaodrjmf290xe3sfgoxyqgoimvans9gydtfpihyv9ydwume7mw8g952p2trcpw5h99ynafacvnzvvlg6xxfmazmlua8ys1t4osif0macsyzpwhmctcrwkhc55lwjoo910mspujs1thj0vshrihzrq5swl7orzgqib7jspyxnozzi2y1d2ndj0z6ow6xefqqap71zabt7sgbifgvzorg7ro3mpcu3k8relygpxgb61jub2eze36 == \r\s\g\y\z\n\d\u\i\0\g\l\i\0\v\k\r\q\v\1\s\m\a\0\2\a\m\m\8\c\6\i\0\m\d\k\9\g\w\a\p\j\5\n\w\q\l\f\x\z\7\9\y\q\m\j\9\u\s\c\p\y\b\w\a\l\5\7\f\a\6\9\0\3\0\h\u\7\l\0\x\u\a\f\f\q\1\n\7\4\e\u\f\x\2\5\9\e\y\1\d\a\a\d\9\e\3\r\t\3\y\z\w\n\t\9\3\m\w\u\x\u\l\o\7\t\j\4\x\4\e\z\6\m\w\9\v\i\m\n\y\t\f\h\8\z\r\1\0\e\j\e\t\c\m\r\s\m\2\z\s\y\r\v\i\z\u\m\8\p\7\u\a\0\5\d\u\8\d\8\y\1\r\p\p\7\t\h\v\4\t\w\h\0\2\s\n\p\b\n\h\t\b\1\c\n\h\9\t\n\1\v\r\2\8\o\t\v\b\f\7\q\a\w\5\d\p\a\d\9\4\n\0\y\c\t\1\9\n\0\2\z\2\u\a\5\o\z\5\j\s\a\6\c\y\u\z\g\r\g\s\f\d\l\h\h\1\w\h\f\f\a\o\d\r\j\m\f\2\9\0\x\e\3\s\f\g\o\x\y\q\g\o\i\m\v\a\n\s\9\g\y\d\t\f\p\i\h\y\v\9\y\d\w\u\m\e\7\m\w\8\g\9\5\2\p\2\t\r\c\p\w\5\h\9\9\y\n\a\f\a\c\v\n\z\v\v\l\g\6\x\x\f\m\a\z\m\l\u\a\8\y\s\1\t\4\o\s\i\f\0\m\a\c\s\y\z\p\w\h\m\c\t\c\r\w\k\h\c\5\5\l\w\j\o\o\9\1\0\m\s\p\u\j\s\1\t\h\j\0\v\s\h\r\i\h\z\r\q\5\s\w\l\7\o\r\z\g\q\i\b\7\j\s\p\y\x\n\o\z\z\i\2\y\1\d\2\n\d\j\0\z\6\o\w\6\x\e\f\q\q\a\p\7\1\z\a\b\t\7\s\g\b\i\f\g\v\z\o\r\g\7\r\o\3\m\p\c\u\3\k\8\r\e\l\y\g\p\x\g\b\6\1\j\u\b\2\e\z\e\3\6 ]] 00:06:12.595 00:06:12.595 real 0m3.513s 00:06:12.595 user 0m1.825s 00:06:12.595 sys 0m0.707s 00:06:12.595 07:33:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.595 ************************************ 00:06:12.595 END TEST dd_flags_misc_forced_aio 00:06:12.595 ************************************ 00:06:12.595 07:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:12.595 07:33:38 -- dd/posix.sh@1 -- # cleanup 00:06:12.595 07:33:38 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:12.595 07:33:38 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:12.595 ************************************ 00:06:12.595 END TEST spdk_dd_posix 00:06:12.595 ************************************ 00:06:12.595 00:06:12.595 real 0m16.661s 00:06:12.595 user 0m7.651s 00:06:12.595 sys 0m3.219s 00:06:12.595 07:33:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.595 07:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:12.595 07:33:38 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:12.595 07:33:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.595 07:33:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:12.595 07:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:12.595 ************************************ 00:06:12.595 START TEST spdk_dd_malloc 00:06:12.595 ************************************ 00:06:12.595 07:33:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:12.854 * Looking for test storage... 00:06:12.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.854 07:33:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:12.854 07:33:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:12.854 07:33:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:12.854 07:33:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:12.854 07:33:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:12.854 07:33:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:12.854 07:33:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:12.854 07:33:38 -- scripts/common.sh@335 -- # IFS=.-: 00:06:12.854 07:33:38 -- scripts/common.sh@335 -- # read -ra ver1 00:06:12.854 07:33:38 -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.854 07:33:38 -- scripts/common.sh@336 -- # read -ra ver2 00:06:12.854 07:33:38 -- scripts/common.sh@337 -- # local 'op=<' 00:06:12.854 07:33:38 -- scripts/common.sh@339 -- # ver1_l=2 00:06:12.854 07:33:38 -- scripts/common.sh@340 -- # ver2_l=1 00:06:12.854 07:33:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:12.854 07:33:38 -- scripts/common.sh@343 -- # case "$op" in 00:06:12.854 07:33:38 -- scripts/common.sh@344 -- # : 1 00:06:12.854 07:33:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:12.854 07:33:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.854 07:33:38 -- scripts/common.sh@364 -- # decimal 1 00:06:12.854 07:33:38 -- scripts/common.sh@352 -- # local d=1 00:06:12.855 07:33:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.855 07:33:38 -- scripts/common.sh@354 -- # echo 1 00:06:12.855 07:33:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:12.855 07:33:38 -- scripts/common.sh@365 -- # decimal 2 00:06:12.855 07:33:38 -- scripts/common.sh@352 -- # local d=2 00:06:12.855 07:33:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.855 07:33:38 -- scripts/common.sh@354 -- # echo 2 00:06:12.855 07:33:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:12.855 07:33:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:12.855 07:33:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:12.855 07:33:38 -- scripts/common.sh@367 -- # return 0 00:06:12.855 07:33:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.855 07:33:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:12.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.855 --rc genhtml_branch_coverage=1 00:06:12.855 --rc genhtml_function_coverage=1 00:06:12.855 --rc genhtml_legend=1 00:06:12.855 --rc geninfo_all_blocks=1 00:06:12.855 --rc geninfo_unexecuted_blocks=1 00:06:12.855 00:06:12.855 ' 00:06:12.855 07:33:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:12.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.855 --rc genhtml_branch_coverage=1 00:06:12.855 --rc genhtml_function_coverage=1 00:06:12.855 --rc genhtml_legend=1 00:06:12.855 --rc geninfo_all_blocks=1 00:06:12.855 --rc geninfo_unexecuted_blocks=1 00:06:12.855 00:06:12.855 ' 00:06:12.855 07:33:38 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:06:12.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.855 --rc genhtml_branch_coverage=1 00:06:12.855 --rc genhtml_function_coverage=1 00:06:12.855 --rc genhtml_legend=1 00:06:12.855 --rc geninfo_all_blocks=1 00:06:12.855 --rc geninfo_unexecuted_blocks=1 00:06:12.855 00:06:12.855 ' 00:06:12.855 07:33:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:12.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.855 --rc genhtml_branch_coverage=1 00:06:12.855 --rc genhtml_function_coverage=1 00:06:12.855 --rc genhtml_legend=1 00:06:12.855 --rc geninfo_all_blocks=1 00:06:12.855 --rc geninfo_unexecuted_blocks=1 00:06:12.855 00:06:12.855 ' 00:06:12.855 07:33:38 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.855 07:33:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.855 07:33:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.855 07:33:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.855 07:33:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.855 07:33:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.855 07:33:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.855 07:33:38 -- paths/export.sh@5 -- # export PATH 00:06:12.855 07:33:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.855 07:33:38 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:12.855 07:33:38 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.855 07:33:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.855 07:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:12.855 ************************************ 00:06:12.855 START TEST dd_malloc_copy 00:06:12.855 ************************************ 00:06:12.855 07:33:38 -- common/autotest_common.sh@1114 -- # malloc_copy 00:06:12.855 07:33:38 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:12.855 07:33:38 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:12.855 07:33:38 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:12.855 07:33:38 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:12.855 07:33:38 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:12.855 07:33:38 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:12.855 07:33:38 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:12.855 07:33:38 -- dd/malloc.sh@28 -- # gen_conf 00:06:12.855 07:33:38 -- dd/common.sh@31 -- # xtrace_disable 00:06:12.855 07:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:12.855 [2024-12-02 07:33:38.442005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.855 [2024-12-02 07:33:38.442660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58593 ] 00:06:12.855 { 00:06:12.855 "subsystems": [ 00:06:12.855 { 00:06:12.855 "subsystem": "bdev", 00:06:12.855 "config": [ 00:06:12.855 { 00:06:12.855 "params": { 00:06:12.855 "block_size": 512, 00:06:12.855 "num_blocks": 1048576, 00:06:12.855 "name": "malloc0" 00:06:12.855 }, 00:06:12.855 "method": "bdev_malloc_create" 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "params": { 00:06:12.855 "block_size": 512, 00:06:12.855 "num_blocks": 1048576, 00:06:12.855 "name": "malloc1" 00:06:12.855 }, 00:06:12.855 "method": "bdev_malloc_create" 00:06:12.855 }, 00:06:12.855 { 00:06:12.855 "method": "bdev_wait_for_examine" 00:06:12.855 } 00:06:12.855 ] 00:06:12.855 } 00:06:12.855 ] 00:06:12.855 } 00:06:13.114 [2024-12-02 07:33:38.564903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.114 [2024-12-02 07:33:38.610551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.493  [2024-12-02T07:33:41.054Z] Copying: 256/512 [MB] (256 MBps) [2024-12-02T07:33:41.335Z] Copying: 512/512 [MB] (average 256 MBps) 00:06:15.711 00:06:15.711 07:33:41 -- dd/malloc.sh@33 -- # gen_conf 00:06:15.711 07:33:41 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:15.711 07:33:41 -- dd/common.sh@31 -- # xtrace_disable 00:06:15.711 07:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 [2024-12-02 07:33:41.188476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.711 [2024-12-02 07:33:41.188572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58635 ] 00:06:15.711 { 00:06:15.711 "subsystems": [ 00:06:15.711 { 00:06:15.711 "subsystem": "bdev", 00:06:15.711 "config": [ 00:06:15.711 { 00:06:15.711 "params": { 00:06:15.711 "block_size": 512, 00:06:15.711 "num_blocks": 1048576, 00:06:15.711 "name": "malloc0" 00:06:15.711 }, 00:06:15.711 "method": "bdev_malloc_create" 00:06:15.711 }, 00:06:15.711 { 00:06:15.711 "params": { 00:06:15.711 "block_size": 512, 00:06:15.711 "num_blocks": 1048576, 00:06:15.711 "name": "malloc1" 00:06:15.711 }, 00:06:15.711 "method": "bdev_malloc_create" 00:06:15.711 }, 00:06:15.711 { 00:06:15.711 "method": "bdev_wait_for_examine" 00:06:15.711 } 00:06:15.711 ] 00:06:15.711 } 00:06:15.712 ] 00:06:15.712 } 00:06:15.712 [2024-12-02 07:33:41.326390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.979 [2024-12-02 07:33:41.377459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.355  [2024-12-02T07:33:43.918Z] Copying: 256/512 [MB] (256 MBps) [2024-12-02T07:33:43.918Z] Copying: 512/512 [MB] (average 257 MBps) 00:06:18.294 00:06:18.294 ************************************ 00:06:18.294 END TEST dd_malloc_copy 00:06:18.294 ************************************ 00:06:18.294 00:06:18.294 real 0m5.492s 00:06:18.294 user 0m4.904s 00:06:18.294 sys 0m0.445s 00:06:18.294 07:33:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.294 07:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.568 ************************************ 00:06:18.568 END TEST spdk_dd_malloc 00:06:18.568 ************************************ 00:06:18.568 00:06:18.568 real 0m5.733s 00:06:18.568 user 0m5.031s 00:06:18.568 sys 0m0.558s 00:06:18.568 07:33:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.568 07:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.568 07:33:43 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:06:18.568 07:33:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:18.568 07:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.568 07:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.568 ************************************ 00:06:18.568 START TEST spdk_dd_bdev_to_bdev 00:06:18.568 ************************************ 00:06:18.568 07:33:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:06:18.568 * Looking for test storage... 
00:06:18.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:18.568 07:33:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:18.568 07:33:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:18.568 07:33:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:18.568 07:33:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:18.568 07:33:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:18.568 07:33:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:18.568 07:33:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:18.568 07:33:44 -- scripts/common.sh@335 -- # IFS=.-: 00:06:18.568 07:33:44 -- scripts/common.sh@335 -- # read -ra ver1 00:06:18.568 07:33:44 -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.568 07:33:44 -- scripts/common.sh@336 -- # read -ra ver2 00:06:18.568 07:33:44 -- scripts/common.sh@337 -- # local 'op=<' 00:06:18.568 07:33:44 -- scripts/common.sh@339 -- # ver1_l=2 00:06:18.568 07:33:44 -- scripts/common.sh@340 -- # ver2_l=1 00:06:18.568 07:33:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:18.568 07:33:44 -- scripts/common.sh@343 -- # case "$op" in 00:06:18.568 07:33:44 -- scripts/common.sh@344 -- # : 1 00:06:18.568 07:33:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:18.568 07:33:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.568 07:33:44 -- scripts/common.sh@364 -- # decimal 1 00:06:18.568 07:33:44 -- scripts/common.sh@352 -- # local d=1 00:06:18.568 07:33:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.568 07:33:44 -- scripts/common.sh@354 -- # echo 1 00:06:18.568 07:33:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:18.568 07:33:44 -- scripts/common.sh@365 -- # decimal 2 00:06:18.568 07:33:44 -- scripts/common.sh@352 -- # local d=2 00:06:18.568 07:33:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.568 07:33:44 -- scripts/common.sh@354 -- # echo 2 00:06:18.568 07:33:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:18.568 07:33:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:18.568 07:33:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:18.568 07:33:44 -- scripts/common.sh@367 -- # return 0 00:06:18.568 07:33:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.568 07:33:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.568 --rc genhtml_branch_coverage=1 00:06:18.568 --rc genhtml_function_coverage=1 00:06:18.568 --rc genhtml_legend=1 00:06:18.568 --rc geninfo_all_blocks=1 00:06:18.568 --rc geninfo_unexecuted_blocks=1 00:06:18.568 00:06:18.568 ' 00:06:18.568 07:33:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.568 --rc genhtml_branch_coverage=1 00:06:18.568 --rc genhtml_function_coverage=1 00:06:18.568 --rc genhtml_legend=1 00:06:18.568 --rc geninfo_all_blocks=1 00:06:18.568 --rc geninfo_unexecuted_blocks=1 00:06:18.568 00:06:18.568 ' 00:06:18.568 07:33:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.568 --rc genhtml_branch_coverage=1 00:06:18.568 --rc genhtml_function_coverage=1 00:06:18.568 --rc genhtml_legend=1 00:06:18.568 --rc geninfo_all_blocks=1 00:06:18.568 --rc geninfo_unexecuted_blocks=1 00:06:18.568 00:06:18.568 ' 00:06:18.568 07:33:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:18.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.568 --rc genhtml_branch_coverage=1 00:06:18.568 --rc genhtml_function_coverage=1 00:06:18.568 --rc genhtml_legend=1 00:06:18.568 --rc geninfo_all_blocks=1 00:06:18.568 --rc geninfo_unexecuted_blocks=1 00:06:18.568 00:06:18.568 ' 00:06:18.568 07:33:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.568 07:33:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.568 07:33:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.568 07:33:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.568 07:33:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.568 07:33:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.568 07:33:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.568 07:33:44 -- paths/export.sh@5 -- # export PATH 00:06:18.568 07:33:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:18.569 07:33:44 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:18.569 07:33:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.569 07:33:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.569 07:33:44 -- common/autotest_common.sh@10 -- # set +x 00:06:18.851 ************************************ 00:06:18.851 START TEST dd_inflate_file 00:06:18.851 ************************************ 00:06:18.852 07:33:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:18.852 [2024-12-02 07:33:44.242197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:18.852 [2024-12-02 07:33:44.242470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58741 ] 00:06:18.852 [2024-12-02 07:33:44.379820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.852 [2024-12-02 07:33:44.449807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.124  [2024-12-02T07:33:45.007Z] Copying: 64/64 [MB] (average 1600 MBps) 00:06:19.383 00:06:19.383 00:06:19.383 real 0m0.568s 00:06:19.383 user 0m0.312s 00:06:19.383 sys 0m0.139s 00:06:19.383 07:33:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.383 ************************************ 00:06:19.383 07:33:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.383 END TEST dd_inflate_file 00:06:19.383 ************************************ 00:06:19.383 07:33:44 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:19.383 07:33:44 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:19.383 07:33:44 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:19.383 07:33:44 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:19.383 07:33:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:19.383 07:33:44 -- dd/common.sh@31 -- # xtrace_disable 00:06:19.383 07:33:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.383 07:33:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.383 07:33:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.383 ************************************ 00:06:19.383 START TEST dd_copy_to_out_bdev 00:06:19.383 ************************************ 00:06:19.383 07:33:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:19.383 { 00:06:19.383 "subsystems": [ 00:06:19.383 { 00:06:19.383 "subsystem": "bdev", 00:06:19.383 "config": [ 00:06:19.383 { 00:06:19.383 "params": { 00:06:19.383 "trtype": "pcie", 00:06:19.383 "traddr": "0000:00:06.0", 00:06:19.383 "name": "Nvme0" 00:06:19.383 }, 00:06:19.383 "method": "bdev_nvme_attach_controller" 00:06:19.383 }, 00:06:19.383 { 00:06:19.383 "params": { 00:06:19.383 "trtype": "pcie", 00:06:19.383 "traddr": "0000:00:07.0", 00:06:19.383 "name": "Nvme1" 00:06:19.383 }, 00:06:19.383 "method": "bdev_nvme_attach_controller" 00:06:19.383 }, 00:06:19.383 { 00:06:19.383 "method": "bdev_wait_for_examine" 00:06:19.383 } 00:06:19.383 ] 00:06:19.383 } 00:06:19.383 ] 00:06:19.383 } 00:06:19.383 [2024-12-02 07:33:44.875048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:19.383 [2024-12-02 07:33:44.875143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:06:19.643 [2024-12-02 07:33:45.012039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.643 [2024-12-02 07:33:45.064847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.022  [2024-12-02T07:33:46.646Z] Copying: 45/64 [MB] (45 MBps) [2024-12-02T07:33:46.905Z] Copying: 64/64 [MB] (average 45 MBps) 00:06:21.281 00:06:21.281 00:06:21.281 real 0m2.021s 00:06:21.281 user 0m1.799s 00:06:21.281 sys 0m0.153s 00:06:21.281 07:33:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.281 07:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.281 ************************************ 00:06:21.281 END TEST dd_copy_to_out_bdev 00:06:21.281 ************************************ 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:21.281 07:33:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.281 07:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.281 07:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.281 ************************************ 00:06:21.281 START TEST dd_offset_magic 00:06:21.281 ************************************ 00:06:21.281 07:33:46 -- common/autotest_common.sh@1114 -- # offset_magic 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:21.281 07:33:46 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:21.281 07:33:46 -- dd/common.sh@31 -- # xtrace_disable 00:06:21.281 07:33:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.540 [2024-12-02 07:33:46.942952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
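The gen_conf blocks printed above are the JSON bdev configuration that spdk_dd consumes through --json /dev/fd/62. As a rough standalone equivalent, the same transfer could be driven from an ordinary config file; this is a hedged sketch only, with /tmp/dd_bdev.json as an illustrative path and everything else copied from the trace.

# Sketch: same bdev config as gen_conf prints above, read from a regular file
# instead of a process-substitution fd. Only the /tmp path is made up here.
cat > /tmp/dd_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:07.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --ob=Nvme0n1 --json /tmp/dd_bdev.json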
00:06:21.541 [2024-12-02 07:33:46.943019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:06:21.541 { 00:06:21.541 "subsystems": [ 00:06:21.541 { 00:06:21.541 "subsystem": "bdev", 00:06:21.541 "config": [ 00:06:21.541 { 00:06:21.541 "params": { 00:06:21.541 "trtype": "pcie", 00:06:21.541 "traddr": "0000:00:06.0", 00:06:21.541 "name": "Nvme0" 00:06:21.541 }, 00:06:21.541 "method": "bdev_nvme_attach_controller" 00:06:21.541 }, 00:06:21.541 { 00:06:21.541 "params": { 00:06:21.541 "trtype": "pcie", 00:06:21.541 "traddr": "0000:00:07.0", 00:06:21.541 "name": "Nvme1" 00:06:21.541 }, 00:06:21.541 "method": "bdev_nvme_attach_controller" 00:06:21.541 }, 00:06:21.541 { 00:06:21.541 "method": "bdev_wait_for_examine" 00:06:21.541 } 00:06:21.541 ] 00:06:21.541 } 00:06:21.541 ] 00:06:21.541 } 00:06:21.541 [2024-12-02 07:33:47.078401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.541 [2024-12-02 07:33:47.150153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.800  [2024-12-02T07:33:47.684Z] Copying: 65/65 [MB] (average 928 MBps) 00:06:22.060 00:06:22.060 07:33:47 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:22.060 07:33:47 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:22.060 07:33:47 -- dd/common.sh@31 -- # xtrace_disable 00:06:22.060 07:33:47 -- common/autotest_common.sh@10 -- # set +x 00:06:22.060 [2024-12-02 07:33:47.662395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.060 [2024-12-02 07:33:47.663282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58831 ] 00:06:22.060 { 00:06:22.060 "subsystems": [ 00:06:22.060 { 00:06:22.060 "subsystem": "bdev", 00:06:22.060 "config": [ 00:06:22.060 { 00:06:22.060 "params": { 00:06:22.060 "trtype": "pcie", 00:06:22.060 "traddr": "0000:00:06.0", 00:06:22.060 "name": "Nvme0" 00:06:22.060 }, 00:06:22.060 "method": "bdev_nvme_attach_controller" 00:06:22.060 }, 00:06:22.060 { 00:06:22.060 "params": { 00:06:22.060 "trtype": "pcie", 00:06:22.060 "traddr": "0000:00:07.0", 00:06:22.060 "name": "Nvme1" 00:06:22.060 }, 00:06:22.060 "method": "bdev_nvme_attach_controller" 00:06:22.060 }, 00:06:22.060 { 00:06:22.060 "method": "bdev_wait_for_examine" 00:06:22.060 } 00:06:22.061 ] 00:06:22.061 } 00:06:22.061 ] 00:06:22.061 } 00:06:22.320 [2024-12-02 07:33:47.799614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.320 [2024-12-02 07:33:47.850117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.579  [2024-12-02T07:33:48.203Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:22.579 00:06:22.838 07:33:48 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:22.839 07:33:48 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:22.839 07:33:48 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:22.839 07:33:48 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:22.839 07:33:48 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:22.839 07:33:48 -- dd/common.sh@31 -- # xtrace_disable 00:06:22.839 07:33:48 -- common/autotest_common.sh@10 -- # set +x 00:06:22.839 [2024-12-02 07:33:48.254157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.839 [2024-12-02 07:33:48.254239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58851 ] 00:06:22.839 { 00:06:22.839 "subsystems": [ 00:06:22.839 { 00:06:22.839 "subsystem": "bdev", 00:06:22.839 "config": [ 00:06:22.839 { 00:06:22.839 "params": { 00:06:22.839 "trtype": "pcie", 00:06:22.839 "traddr": "0000:00:06.0", 00:06:22.839 "name": "Nvme0" 00:06:22.839 }, 00:06:22.839 "method": "bdev_nvme_attach_controller" 00:06:22.839 }, 00:06:22.839 { 00:06:22.839 "params": { 00:06:22.839 "trtype": "pcie", 00:06:22.839 "traddr": "0000:00:07.0", 00:06:22.839 "name": "Nvme1" 00:06:22.839 }, 00:06:22.839 "method": "bdev_nvme_attach_controller" 00:06:22.839 }, 00:06:22.839 { 00:06:22.839 "method": "bdev_wait_for_examine" 00:06:22.839 } 00:06:22.839 ] 00:06:22.839 } 00:06:22.839 ] 00:06:22.839 } 00:06:22.839 [2024-12-02 07:33:48.391149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.839 [2024-12-02 07:33:48.438763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.098  [2024-12-02T07:33:48.981Z] Copying: 65/65 [MB] (average 1048 MBps) 00:06:23.357 00:06:23.357 07:33:48 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:23.357 07:33:48 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:23.357 07:33:48 -- dd/common.sh@31 -- # xtrace_disable 00:06:23.357 07:33:48 -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 [2024-12-02 07:33:48.921393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.357 [2024-12-02 07:33:48.921484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58860 ] 00:06:23.357 { 00:06:23.357 "subsystems": [ 00:06:23.357 { 00:06:23.357 "subsystem": "bdev", 00:06:23.357 "config": [ 00:06:23.357 { 00:06:23.357 "params": { 00:06:23.357 "trtype": "pcie", 00:06:23.357 "traddr": "0000:00:06.0", 00:06:23.357 "name": "Nvme0" 00:06:23.357 }, 00:06:23.357 "method": "bdev_nvme_attach_controller" 00:06:23.357 }, 00:06:23.357 { 00:06:23.357 "params": { 00:06:23.357 "trtype": "pcie", 00:06:23.357 "traddr": "0000:00:07.0", 00:06:23.357 "name": "Nvme1" 00:06:23.357 }, 00:06:23.357 "method": "bdev_nvme_attach_controller" 00:06:23.357 }, 00:06:23.357 { 00:06:23.357 "method": "bdev_wait_for_examine" 00:06:23.357 } 00:06:23.357 ] 00:06:23.357 } 00:06:23.357 ] 00:06:23.357 } 00:06:23.616 [2024-12-02 07:33:49.056818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.616 [2024-12-02 07:33:49.117880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.876  [2024-12-02T07:33:49.500Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:23.876 00:06:23.876 07:33:49 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:23.876 07:33:49 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:23.876 00:06:23.876 real 0m2.567s 00:06:23.876 user 0m1.921s 00:06:23.876 sys 0m0.464s 00:06:23.876 07:33:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.876 ************************************ 00:06:23.876 END TEST dd_offset_magic 00:06:23.876 ************************************ 00:06:23.876 07:33:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.135 07:33:49 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:24.136 07:33:49 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:24.136 07:33:49 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:24.136 07:33:49 -- dd/common.sh@11 -- # local nvme_ref= 00:06:24.136 07:33:49 -- dd/common.sh@12 -- # local size=4194330 00:06:24.136 07:33:49 -- dd/common.sh@14 -- # local bs=1048576 00:06:24.136 07:33:49 -- dd/common.sh@15 -- # local count=5 00:06:24.136 07:33:49 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:24.136 07:33:49 -- dd/common.sh@18 -- # gen_conf 00:06:24.136 07:33:49 -- dd/common.sh@31 -- # xtrace_disable 00:06:24.136 07:33:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.136 [2024-12-02 07:33:49.560942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
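Condensed, the dd_offset_magic round trip traced above amounts to the sketch below. SPDK_DD and the dump paths come from the log; CONF stands in for the gen_conf JSON shown above; the offsets, counts and block size are the ones actually exercised. This is a reconstruction, not the test script itself.

# Hedged sketch of the offset-magic verification pattern.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=/tmp/dd_bdev.json            # illustrative: the Nvme0/Nvme1 gen_conf JSON above
test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
magic='This Is Our Magic, find it'

echo "$magic" > "$test_file0"                                      # seed the magic
"$SPDK_DD" --if=/dev/zero --of="$test_file0" --oflag=append \
           --bs=1048576 --count=64                                 # inflate to ~64 MiB
"$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --json "$CONF"          # file -> Nvme0n1
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 \
           --bs=1048576 --json "$CONF"                             # copy 65 MiB at a 16 MiB offset
"$SPDK_DD" --ib=Nvme1n1 --of="$test_file1" --count=1 --skip=16 \
           --bs=1048576 --json "$CONF"                             # read the offset region back
read -rn26 magic_check < "$test_file1"
[[ $magic_check == "$magic" ]]                                     # the magic must survive the trip

Only the 16 MiB offset is shown here; the trace repeats the same check with --seek/--skip 64.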
00:06:24.136 [2024-12-02 07:33:49.561032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:06:24.136 { 00:06:24.136 "subsystems": [ 00:06:24.136 { 00:06:24.136 "subsystem": "bdev", 00:06:24.136 "config": [ 00:06:24.136 { 00:06:24.136 "params": { 00:06:24.136 "trtype": "pcie", 00:06:24.136 "traddr": "0000:00:06.0", 00:06:24.136 "name": "Nvme0" 00:06:24.136 }, 00:06:24.136 "method": "bdev_nvme_attach_controller" 00:06:24.136 }, 00:06:24.136 { 00:06:24.136 "params": { 00:06:24.136 "trtype": "pcie", 00:06:24.136 "traddr": "0000:00:07.0", 00:06:24.136 "name": "Nvme1" 00:06:24.136 }, 00:06:24.136 "method": "bdev_nvme_attach_controller" 00:06:24.136 }, 00:06:24.136 { 00:06:24.136 "method": "bdev_wait_for_examine" 00:06:24.136 } 00:06:24.136 ] 00:06:24.136 } 00:06:24.136 ] 00:06:24.136 } 00:06:24.136 [2024-12-02 07:33:49.698683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.136 [2024-12-02 07:33:49.753816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.395  [2024-12-02T07:33:50.278Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:06:24.654 00:06:24.654 07:33:50 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:24.654 07:33:50 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:24.654 07:33:50 -- dd/common.sh@11 -- # local nvme_ref= 00:06:24.654 07:33:50 -- dd/common.sh@12 -- # local size=4194330 00:06:24.654 07:33:50 -- dd/common.sh@14 -- # local bs=1048576 00:06:24.654 07:33:50 -- dd/common.sh@15 -- # local count=5 00:06:24.654 07:33:50 -- dd/common.sh@18 -- # gen_conf 00:06:24.654 07:33:50 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:24.654 07:33:50 -- dd/common.sh@31 -- # xtrace_disable 00:06:24.654 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.654 [2024-12-02 07:33:50.155189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.654 [2024-12-02 07:33:50.155281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58904 ] 00:06:24.654 { 00:06:24.654 "subsystems": [ 00:06:24.654 { 00:06:24.654 "subsystem": "bdev", 00:06:24.654 "config": [ 00:06:24.654 { 00:06:24.654 "params": { 00:06:24.654 "trtype": "pcie", 00:06:24.654 "traddr": "0000:00:06.0", 00:06:24.654 "name": "Nvme0" 00:06:24.654 }, 00:06:24.654 "method": "bdev_nvme_attach_controller" 00:06:24.654 }, 00:06:24.654 { 00:06:24.654 "params": { 00:06:24.654 "trtype": "pcie", 00:06:24.654 "traddr": "0000:00:07.0", 00:06:24.654 "name": "Nvme1" 00:06:24.654 }, 00:06:24.654 "method": "bdev_nvme_attach_controller" 00:06:24.654 }, 00:06:24.654 { 00:06:24.654 "method": "bdev_wait_for_examine" 00:06:24.654 } 00:06:24.654 ] 00:06:24.654 } 00:06:24.654 ] 00:06:24.654 } 00:06:24.913 [2024-12-02 07:33:50.291206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.913 [2024-12-02 07:33:50.342579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.913  [2024-12-02T07:33:50.796Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:25.172 00:06:25.172 07:33:50 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:25.172 00:06:25.172 real 0m6.729s 00:06:25.172 user 0m5.049s 00:06:25.172 sys 0m1.193s 00:06:25.172 07:33:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.172 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.172 ************************************ 00:06:25.172 END TEST spdk_dd_bdev_to_bdev 00:06:25.172 ************************************ 00:06:25.172 07:33:50 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:25.172 07:33:50 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:25.172 07:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.172 07:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.172 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.172 ************************************ 00:06:25.172 START TEST spdk_dd_uring 00:06:25.172 ************************************ 00:06:25.172 07:33:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:25.432 * Looking for test storage... 
00:06:25.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:25.433 07:33:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:25.433 07:33:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:25.433 07:33:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:25.433 07:33:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:25.433 07:33:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:25.433 07:33:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:25.433 07:33:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:25.433 07:33:50 -- scripts/common.sh@335 -- # IFS=.-: 00:06:25.433 07:33:50 -- scripts/common.sh@335 -- # read -ra ver1 00:06:25.433 07:33:50 -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.433 07:33:50 -- scripts/common.sh@336 -- # read -ra ver2 00:06:25.433 07:33:50 -- scripts/common.sh@337 -- # local 'op=<' 00:06:25.433 07:33:50 -- scripts/common.sh@339 -- # ver1_l=2 00:06:25.433 07:33:50 -- scripts/common.sh@340 -- # ver2_l=1 00:06:25.433 07:33:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:25.433 07:33:50 -- scripts/common.sh@343 -- # case "$op" in 00:06:25.433 07:33:50 -- scripts/common.sh@344 -- # : 1 00:06:25.433 07:33:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:25.433 07:33:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.433 07:33:50 -- scripts/common.sh@364 -- # decimal 1 00:06:25.433 07:33:50 -- scripts/common.sh@352 -- # local d=1 00:06:25.433 07:33:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.433 07:33:50 -- scripts/common.sh@354 -- # echo 1 00:06:25.433 07:33:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:25.433 07:33:50 -- scripts/common.sh@365 -- # decimal 2 00:06:25.433 07:33:50 -- scripts/common.sh@352 -- # local d=2 00:06:25.433 07:33:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.433 07:33:50 -- scripts/common.sh@354 -- # echo 2 00:06:25.433 07:33:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:25.433 07:33:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:25.433 07:33:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:25.433 07:33:50 -- scripts/common.sh@367 -- # return 0 00:06:25.433 07:33:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.433 07:33:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.433 --rc genhtml_branch_coverage=1 00:06:25.433 --rc genhtml_function_coverage=1 00:06:25.433 --rc genhtml_legend=1 00:06:25.433 --rc geninfo_all_blocks=1 00:06:25.433 --rc geninfo_unexecuted_blocks=1 00:06:25.433 00:06:25.433 ' 00:06:25.433 07:33:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.433 --rc genhtml_branch_coverage=1 00:06:25.433 --rc genhtml_function_coverage=1 00:06:25.433 --rc genhtml_legend=1 00:06:25.433 --rc geninfo_all_blocks=1 00:06:25.433 --rc geninfo_unexecuted_blocks=1 00:06:25.433 00:06:25.433 ' 00:06:25.433 07:33:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.433 --rc genhtml_branch_coverage=1 00:06:25.433 --rc genhtml_function_coverage=1 00:06:25.433 --rc genhtml_legend=1 00:06:25.433 --rc geninfo_all_blocks=1 00:06:25.433 --rc geninfo_unexecuted_blocks=1 00:06:25.433 00:06:25.433 ' 00:06:25.433 07:33:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.433 --rc genhtml_branch_coverage=1 00:06:25.433 --rc genhtml_function_coverage=1 00:06:25.433 --rc genhtml_legend=1 00:06:25.433 --rc geninfo_all_blocks=1 00:06:25.433 --rc geninfo_unexecuted_blocks=1 00:06:25.433 00:06:25.433 ' 00:06:25.433 07:33:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.433 07:33:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.433 07:33:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.433 07:33:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.433 07:33:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.433 07:33:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.433 07:33:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.433 07:33:50 -- paths/export.sh@5 -- # export PATH 00:06:25.433 07:33:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.433 07:33:50 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:25.433 07:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.433 07:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.433 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.433 ************************************ 00:06:25.433 START TEST dd_uring_copy 00:06:25.433 ************************************ 00:06:25.433 07:33:50 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:06:25.433 07:33:50 -- dd/uring.sh@15 -- # local zram_dev_id 00:06:25.433 07:33:50 -- dd/uring.sh@16 -- # local magic 00:06:25.433 07:33:50 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:25.433 07:33:50 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:25.433 07:33:50 -- dd/uring.sh@19 -- # local verify_magic 00:06:25.433 07:33:50 -- dd/uring.sh@21 -- # init_zram 00:06:25.433 07:33:50 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:25.433 07:33:50 -- dd/common.sh@164 -- # return 00:06:25.433 07:33:50 -- dd/uring.sh@22 -- # create_zram_dev 00:06:25.433 07:33:50 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:25.433 07:33:50 -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:25.433 07:33:50 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:25.433 07:33:50 -- dd/common.sh@181 -- # local id=1 00:06:25.433 07:33:50 -- dd/common.sh@182 -- # local size=512M 00:06:25.433 07:33:50 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:25.433 07:33:50 -- dd/common.sh@186 -- # echo 512M 00:06:25.433 07:33:50 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:25.433 07:33:50 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:25.433 07:33:50 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:25.433 07:33:50 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:25.434 07:33:50 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:25.434 07:33:50 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:25.434 07:33:50 -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:25.434 07:33:50 -- dd/common.sh@98 -- # xtrace_disable 00:06:25.434 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.434 07:33:50 -- dd/uring.sh@41 -- # magic=r37zv35svp0al8gwhi1g3ee1x35zucj6aetxhyxmp5rtyepvw6w1v6vtlzar567yng88crk8kudgw0pmv6owsxv1jja8wesx0uivwutm0hgbbs2do0gf1l8ny84gw1azhogculdn0ucjeqq33rkn3t9di1pn0pkutms502n7cr5bidxlqmd8maj3mimfujp3igebrr9hhwgfrxf85pmuzcj0zlz05gmc2u79d5sm2ssh47y1m4izrnncpahduh6iigm8po9aapck8hmukeznym4iyvyz21yjv7geg2rx8s1ukbevmhokx12nijn7rf66abv3abjd9qklyncsvyeoblanfofjeudoldqgqtb4lhr651ywvrcryx8cnexwsjnwk7tdw12rb2utcnolp049wj04aejtvkyjugjc1lycqmntdfs96c8uwus7s6ppe3rzxxnurldvvfsxymze36np4pfycobbbxfxkqcc7xxt3dkb935rsfyj03fpdq5557r7ub5938kbcku2tead9kq3xsopbvmrqy1zsogthagjd4k5fqkwi0f0f2ecpx7jw2m21ektz894aocxmu5jw02ti6gpqk3nqh1lvyldx2o81lyw3s35v7d72wo4v1z05d7ko2v980sgksgqq6engfovm0a0j4houc0q1lfbfe3k040yxox6f2tfoy3s6ihljn2c2qztwh1nx08bl2vqwtmtg3ig9rribmz9ntqqq8c82lmaaq47rs96kcta5u82uyh2r4w39xa1pxbuugifv5tvny3wi29q118fam1gj7sfxqcsodzkjxbflugtaaxrkjs3sokmkaop3i230iuvnm13qrgt01znkxihc7s3oc6201jd3ad5eleldo1s529imtwx4p6yhn0luvco1hur836t0vnhraul3jk3ofjtvyaew2g12zoaxxpt2670k680ef1papn21cg4b5rj28l06dzbmt84gfovoviaynsja0d8xsqgukyi3pkp2mg784b4rftr 00:06:25.434 07:33:50 -- dd/uring.sh@42 -- # echo 
r37zv35svp0al8gwhi1g3ee1x35zucj6aetxhyxmp5rtyepvw6w1v6vtlzar567yng88crk8kudgw0pmv6owsxv1jja8wesx0uivwutm0hgbbs2do0gf1l8ny84gw1azhogculdn0ucjeqq33rkn3t9di1pn0pkutms502n7cr5bidxlqmd8maj3mimfujp3igebrr9hhwgfrxf85pmuzcj0zlz05gmc2u79d5sm2ssh47y1m4izrnncpahduh6iigm8po9aapck8hmukeznym4iyvyz21yjv7geg2rx8s1ukbevmhokx12nijn7rf66abv3abjd9qklyncsvyeoblanfofjeudoldqgqtb4lhr651ywvrcryx8cnexwsjnwk7tdw12rb2utcnolp049wj04aejtvkyjugjc1lycqmntdfs96c8uwus7s6ppe3rzxxnurldvvfsxymze36np4pfycobbbxfxkqcc7xxt3dkb935rsfyj03fpdq5557r7ub5938kbcku2tead9kq3xsopbvmrqy1zsogthagjd4k5fqkwi0f0f2ecpx7jw2m21ektz894aocxmu5jw02ti6gpqk3nqh1lvyldx2o81lyw3s35v7d72wo4v1z05d7ko2v980sgksgqq6engfovm0a0j4houc0q1lfbfe3k040yxox6f2tfoy3s6ihljn2c2qztwh1nx08bl2vqwtmtg3ig9rribmz9ntqqq8c82lmaaq47rs96kcta5u82uyh2r4w39xa1pxbuugifv5tvny3wi29q118fam1gj7sfxqcsodzkjxbflugtaaxrkjs3sokmkaop3i230iuvnm13qrgt01znkxihc7s3oc6201jd3ad5eleldo1s529imtwx4p6yhn0luvco1hur836t0vnhraul3jk3ofjtvyaew2g12zoaxxpt2670k680ef1papn21cg4b5rj28l06dzbmt84gfovoviaynsja0d8xsqgukyi3pkp2mg784b4rftr 00:06:25.434 07:33:50 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:25.434 [2024-12-02 07:33:51.038262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.434 [2024-12-02 07:33:51.038376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:06:25.693 [2024-12-02 07:33:51.177869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.693 [2024-12-02 07:33:51.248312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.260  [2024-12-02T07:33:52.143Z] Copying: 511/511 [MB] (average 1939 MBps) 00:06:26.519 00:06:26.519 07:33:51 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:26.519 07:33:51 -- dd/uring.sh@54 -- # gen_conf 00:06:26.519 07:33:51 -- dd/common.sh@31 -- # xtrace_disable 00:06:26.519 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:06:26.519 [2024-12-02 07:33:51.956611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.519 [2024-12-02 07:33:51.956723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58994 ] 00:06:26.519 { 00:06:26.519 "subsystems": [ 00:06:26.519 { 00:06:26.519 "subsystem": "bdev", 00:06:26.519 "config": [ 00:06:26.519 { 00:06:26.519 "params": { 00:06:26.519 "block_size": 512, 00:06:26.519 "num_blocks": 1048576, 00:06:26.519 "name": "malloc0" 00:06:26.519 }, 00:06:26.519 "method": "bdev_malloc_create" 00:06:26.519 }, 00:06:26.519 { 00:06:26.519 "params": { 00:06:26.519 "filename": "/dev/zram1", 00:06:26.519 "name": "uring0" 00:06:26.519 }, 00:06:26.519 "method": "bdev_uring_create" 00:06:26.519 }, 00:06:26.519 { 00:06:26.519 "method": "bdev_wait_for_examine" 00:06:26.519 } 00:06:26.519 ] 00:06:26.519 } 00:06:26.519 ] 00:06:26.519 } 00:06:26.519 [2024-12-02 07:33:52.087905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.779 [2024-12-02 07:33:52.149891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.715  [2024-12-02T07:33:54.717Z] Copying: 237/512 [MB] (237 MBps) [2024-12-02T07:33:54.717Z] Copying: 469/512 [MB] (232 MBps) [2024-12-02T07:33:54.975Z] Copying: 512/512 [MB] (average 233 MBps) 00:06:29.351 00:06:29.351 07:33:54 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:29.351 07:33:54 -- dd/uring.sh@60 -- # gen_conf 00:06:29.351 07:33:54 -- dd/common.sh@31 -- # xtrace_disable 00:06:29.351 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.351 [2024-12-02 07:33:54.797472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.351 [2024-12-02 07:33:54.797563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:06:29.351 { 00:06:29.351 "subsystems": [ 00:06:29.351 { 00:06:29.351 "subsystem": "bdev", 00:06:29.351 "config": [ 00:06:29.351 { 00:06:29.351 "params": { 00:06:29.351 "block_size": 512, 00:06:29.351 "num_blocks": 1048576, 00:06:29.351 "name": "malloc0" 00:06:29.351 }, 00:06:29.351 "method": "bdev_malloc_create" 00:06:29.351 }, 00:06:29.351 { 00:06:29.351 "params": { 00:06:29.351 "filename": "/dev/zram1", 00:06:29.351 "name": "uring0" 00:06:29.351 }, 00:06:29.351 "method": "bdev_uring_create" 00:06:29.351 }, 00:06:29.351 { 00:06:29.351 "method": "bdev_wait_for_examine" 00:06:29.352 } 00:06:29.352 ] 00:06:29.352 } 00:06:29.352 ] 00:06:29.352 } 00:06:29.352 [2024-12-02 07:33:54.933465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.610 [2024-12-02 07:33:54.982250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.546  [2024-12-02T07:33:57.550Z] Copying: 132/512 [MB] (132 MBps) [2024-12-02T07:33:58.118Z] Copying: 267/512 [MB] (135 MBps) [2024-12-02T07:33:59.055Z] Copying: 406/512 [MB] (139 MBps) [2024-12-02T07:33:59.055Z] Copying: 512/512 [MB] (average 138 MBps) 00:06:33.431 00:06:33.691 07:33:59 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:33.691 07:33:59 -- dd/uring.sh@66 -- # [[ r37zv35svp0al8gwhi1g3ee1x35zucj6aetxhyxmp5rtyepvw6w1v6vtlzar567yng88crk8kudgw0pmv6owsxv1jja8wesx0uivwutm0hgbbs2do0gf1l8ny84gw1azhogculdn0ucjeqq33rkn3t9di1pn0pkutms502n7cr5bidxlqmd8maj3mimfujp3igebrr9hhwgfrxf85pmuzcj0zlz05gmc2u79d5sm2ssh47y1m4izrnncpahduh6iigm8po9aapck8hmukeznym4iyvyz21yjv7geg2rx8s1ukbevmhokx12nijn7rf66abv3abjd9qklyncsvyeoblanfofjeudoldqgqtb4lhr651ywvrcryx8cnexwsjnwk7tdw12rb2utcnolp049wj04aejtvkyjugjc1lycqmntdfs96c8uwus7s6ppe3rzxxnurldvvfsxymze36np4pfycobbbxfxkqcc7xxt3dkb935rsfyj03fpdq5557r7ub5938kbcku2tead9kq3xsopbvmrqy1zsogthagjd4k5fqkwi0f0f2ecpx7jw2m21ektz894aocxmu5jw02ti6gpqk3nqh1lvyldx2o81lyw3s35v7d72wo4v1z05d7ko2v980sgksgqq6engfovm0a0j4houc0q1lfbfe3k040yxox6f2tfoy3s6ihljn2c2qztwh1nx08bl2vqwtmtg3ig9rribmz9ntqqq8c82lmaaq47rs96kcta5u82uyh2r4w39xa1pxbuugifv5tvny3wi29q118fam1gj7sfxqcsodzkjxbflugtaaxrkjs3sokmkaop3i230iuvnm13qrgt01znkxihc7s3oc6201jd3ad5eleldo1s529imtwx4p6yhn0luvco1hur836t0vnhraul3jk3ofjtvyaew2g12zoaxxpt2670k680ef1papn21cg4b5rj28l06dzbmt84gfovoviaynsja0d8xsqgukyi3pkp2mg784b4rftr == 
\r\3\7\z\v\3\5\s\v\p\0\a\l\8\g\w\h\i\1\g\3\e\e\1\x\3\5\z\u\c\j\6\a\e\t\x\h\y\x\m\p\5\r\t\y\e\p\v\w\6\w\1\v\6\v\t\l\z\a\r\5\6\7\y\n\g\8\8\c\r\k\8\k\u\d\g\w\0\p\m\v\6\o\w\s\x\v\1\j\j\a\8\w\e\s\x\0\u\i\v\w\u\t\m\0\h\g\b\b\s\2\d\o\0\g\f\1\l\8\n\y\8\4\g\w\1\a\z\h\o\g\c\u\l\d\n\0\u\c\j\e\q\q\3\3\r\k\n\3\t\9\d\i\1\p\n\0\p\k\u\t\m\s\5\0\2\n\7\c\r\5\b\i\d\x\l\q\m\d\8\m\a\j\3\m\i\m\f\u\j\p\3\i\g\e\b\r\r\9\h\h\w\g\f\r\x\f\8\5\p\m\u\z\c\j\0\z\l\z\0\5\g\m\c\2\u\7\9\d\5\s\m\2\s\s\h\4\7\y\1\m\4\i\z\r\n\n\c\p\a\h\d\u\h\6\i\i\g\m\8\p\o\9\a\a\p\c\k\8\h\m\u\k\e\z\n\y\m\4\i\y\v\y\z\2\1\y\j\v\7\g\e\g\2\r\x\8\s\1\u\k\b\e\v\m\h\o\k\x\1\2\n\i\j\n\7\r\f\6\6\a\b\v\3\a\b\j\d\9\q\k\l\y\n\c\s\v\y\e\o\b\l\a\n\f\o\f\j\e\u\d\o\l\d\q\g\q\t\b\4\l\h\r\6\5\1\y\w\v\r\c\r\y\x\8\c\n\e\x\w\s\j\n\w\k\7\t\d\w\1\2\r\b\2\u\t\c\n\o\l\p\0\4\9\w\j\0\4\a\e\j\t\v\k\y\j\u\g\j\c\1\l\y\c\q\m\n\t\d\f\s\9\6\c\8\u\w\u\s\7\s\6\p\p\e\3\r\z\x\x\n\u\r\l\d\v\v\f\s\x\y\m\z\e\3\6\n\p\4\p\f\y\c\o\b\b\b\x\f\x\k\q\c\c\7\x\x\t\3\d\k\b\9\3\5\r\s\f\y\j\0\3\f\p\d\q\5\5\5\7\r\7\u\b\5\9\3\8\k\b\c\k\u\2\t\e\a\d\9\k\q\3\x\s\o\p\b\v\m\r\q\y\1\z\s\o\g\t\h\a\g\j\d\4\k\5\f\q\k\w\i\0\f\0\f\2\e\c\p\x\7\j\w\2\m\2\1\e\k\t\z\8\9\4\a\o\c\x\m\u\5\j\w\0\2\t\i\6\g\p\q\k\3\n\q\h\1\l\v\y\l\d\x\2\o\8\1\l\y\w\3\s\3\5\v\7\d\7\2\w\o\4\v\1\z\0\5\d\7\k\o\2\v\9\8\0\s\g\k\s\g\q\q\6\e\n\g\f\o\v\m\0\a\0\j\4\h\o\u\c\0\q\1\l\f\b\f\e\3\k\0\4\0\y\x\o\x\6\f\2\t\f\o\y\3\s\6\i\h\l\j\n\2\c\2\q\z\t\w\h\1\n\x\0\8\b\l\2\v\q\w\t\m\t\g\3\i\g\9\r\r\i\b\m\z\9\n\t\q\q\q\8\c\8\2\l\m\a\a\q\4\7\r\s\9\6\k\c\t\a\5\u\8\2\u\y\h\2\r\4\w\3\9\x\a\1\p\x\b\u\u\g\i\f\v\5\t\v\n\y\3\w\i\2\9\q\1\1\8\f\a\m\1\g\j\7\s\f\x\q\c\s\o\d\z\k\j\x\b\f\l\u\g\t\a\a\x\r\k\j\s\3\s\o\k\m\k\a\o\p\3\i\2\3\0\i\u\v\n\m\1\3\q\r\g\t\0\1\z\n\k\x\i\h\c\7\s\3\o\c\6\2\0\1\j\d\3\a\d\5\e\l\e\l\d\o\1\s\5\2\9\i\m\t\w\x\4\p\6\y\h\n\0\l\u\v\c\o\1\h\u\r\8\3\6\t\0\v\n\h\r\a\u\l\3\j\k\3\o\f\j\t\v\y\a\e\w\2\g\1\2\z\o\a\x\x\p\t\2\6\7\0\k\6\8\0\e\f\1\p\a\p\n\2\1\c\g\4\b\5\r\j\2\8\l\0\6\d\z\b\m\t\8\4\g\f\o\v\o\v\i\a\y\n\s\j\a\0\d\8\x\s\q\g\u\k\y\i\3\p\k\p\2\m\g\7\8\4\b\4\r\f\t\r ]] 00:06:33.691 07:33:59 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:33.691 07:33:59 -- dd/uring.sh@69 -- # [[ r37zv35svp0al8gwhi1g3ee1x35zucj6aetxhyxmp5rtyepvw6w1v6vtlzar567yng88crk8kudgw0pmv6owsxv1jja8wesx0uivwutm0hgbbs2do0gf1l8ny84gw1azhogculdn0ucjeqq33rkn3t9di1pn0pkutms502n7cr5bidxlqmd8maj3mimfujp3igebrr9hhwgfrxf85pmuzcj0zlz05gmc2u79d5sm2ssh47y1m4izrnncpahduh6iigm8po9aapck8hmukeznym4iyvyz21yjv7geg2rx8s1ukbevmhokx12nijn7rf66abv3abjd9qklyncsvyeoblanfofjeudoldqgqtb4lhr651ywvrcryx8cnexwsjnwk7tdw12rb2utcnolp049wj04aejtvkyjugjc1lycqmntdfs96c8uwus7s6ppe3rzxxnurldvvfsxymze36np4pfycobbbxfxkqcc7xxt3dkb935rsfyj03fpdq5557r7ub5938kbcku2tead9kq3xsopbvmrqy1zsogthagjd4k5fqkwi0f0f2ecpx7jw2m21ektz894aocxmu5jw02ti6gpqk3nqh1lvyldx2o81lyw3s35v7d72wo4v1z05d7ko2v980sgksgqq6engfovm0a0j4houc0q1lfbfe3k040yxox6f2tfoy3s6ihljn2c2qztwh1nx08bl2vqwtmtg3ig9rribmz9ntqqq8c82lmaaq47rs96kcta5u82uyh2r4w39xa1pxbuugifv5tvny3wi29q118fam1gj7sfxqcsodzkjxbflugtaaxrkjs3sokmkaop3i230iuvnm13qrgt01znkxihc7s3oc6201jd3ad5eleldo1s529imtwx4p6yhn0luvco1hur836t0vnhraul3jk3ofjtvyaew2g12zoaxxpt2670k680ef1papn21cg4b5rj28l06dzbmt84gfovoviaynsja0d8xsqgukyi3pkp2mg784b4rftr == 
\r\3\7\z\v\3\5\s\v\p\0\a\l\8\g\w\h\i\1\g\3\e\e\1\x\3\5\z\u\c\j\6\a\e\t\x\h\y\x\m\p\5\r\t\y\e\p\v\w\6\w\1\v\6\v\t\l\z\a\r\5\6\7\y\n\g\8\8\c\r\k\8\k\u\d\g\w\0\p\m\v\6\o\w\s\x\v\1\j\j\a\8\w\e\s\x\0\u\i\v\w\u\t\m\0\h\g\b\b\s\2\d\o\0\g\f\1\l\8\n\y\8\4\g\w\1\a\z\h\o\g\c\u\l\d\n\0\u\c\j\e\q\q\3\3\r\k\n\3\t\9\d\i\1\p\n\0\p\k\u\t\m\s\5\0\2\n\7\c\r\5\b\i\d\x\l\q\m\d\8\m\a\j\3\m\i\m\f\u\j\p\3\i\g\e\b\r\r\9\h\h\w\g\f\r\x\f\8\5\p\m\u\z\c\j\0\z\l\z\0\5\g\m\c\2\u\7\9\d\5\s\m\2\s\s\h\4\7\y\1\m\4\i\z\r\n\n\c\p\a\h\d\u\h\6\i\i\g\m\8\p\o\9\a\a\p\c\k\8\h\m\u\k\e\z\n\y\m\4\i\y\v\y\z\2\1\y\j\v\7\g\e\g\2\r\x\8\s\1\u\k\b\e\v\m\h\o\k\x\1\2\n\i\j\n\7\r\f\6\6\a\b\v\3\a\b\j\d\9\q\k\l\y\n\c\s\v\y\e\o\b\l\a\n\f\o\f\j\e\u\d\o\l\d\q\g\q\t\b\4\l\h\r\6\5\1\y\w\v\r\c\r\y\x\8\c\n\e\x\w\s\j\n\w\k\7\t\d\w\1\2\r\b\2\u\t\c\n\o\l\p\0\4\9\w\j\0\4\a\e\j\t\v\k\y\j\u\g\j\c\1\l\y\c\q\m\n\t\d\f\s\9\6\c\8\u\w\u\s\7\s\6\p\p\e\3\r\z\x\x\n\u\r\l\d\v\v\f\s\x\y\m\z\e\3\6\n\p\4\p\f\y\c\o\b\b\b\x\f\x\k\q\c\c\7\x\x\t\3\d\k\b\9\3\5\r\s\f\y\j\0\3\f\p\d\q\5\5\5\7\r\7\u\b\5\9\3\8\k\b\c\k\u\2\t\e\a\d\9\k\q\3\x\s\o\p\b\v\m\r\q\y\1\z\s\o\g\t\h\a\g\j\d\4\k\5\f\q\k\w\i\0\f\0\f\2\e\c\p\x\7\j\w\2\m\2\1\e\k\t\z\8\9\4\a\o\c\x\m\u\5\j\w\0\2\t\i\6\g\p\q\k\3\n\q\h\1\l\v\y\l\d\x\2\o\8\1\l\y\w\3\s\3\5\v\7\d\7\2\w\o\4\v\1\z\0\5\d\7\k\o\2\v\9\8\0\s\g\k\s\g\q\q\6\e\n\g\f\o\v\m\0\a\0\j\4\h\o\u\c\0\q\1\l\f\b\f\e\3\k\0\4\0\y\x\o\x\6\f\2\t\f\o\y\3\s\6\i\h\l\j\n\2\c\2\q\z\t\w\h\1\n\x\0\8\b\l\2\v\q\w\t\m\t\g\3\i\g\9\r\r\i\b\m\z\9\n\t\q\q\q\8\c\8\2\l\m\a\a\q\4\7\r\s\9\6\k\c\t\a\5\u\8\2\u\y\h\2\r\4\w\3\9\x\a\1\p\x\b\u\u\g\i\f\v\5\t\v\n\y\3\w\i\2\9\q\1\1\8\f\a\m\1\g\j\7\s\f\x\q\c\s\o\d\z\k\j\x\b\f\l\u\g\t\a\a\x\r\k\j\s\3\s\o\k\m\k\a\o\p\3\i\2\3\0\i\u\v\n\m\1\3\q\r\g\t\0\1\z\n\k\x\i\h\c\7\s\3\o\c\6\2\0\1\j\d\3\a\d\5\e\l\e\l\d\o\1\s\5\2\9\i\m\t\w\x\4\p\6\y\h\n\0\l\u\v\c\o\1\h\u\r\8\3\6\t\0\v\n\h\r\a\u\l\3\j\k\3\o\f\j\t\v\y\a\e\w\2\g\1\2\z\o\a\x\x\p\t\2\6\7\0\k\6\8\0\e\f\1\p\a\p\n\2\1\c\g\4\b\5\r\j\2\8\l\0\6\d\z\b\m\t\8\4\g\f\o\v\o\v\i\a\y\n\s\j\a\0\d\8\x\s\q\g\u\k\y\i\3\p\k\p\2\m\g\7\8\4\b\4\r\f\t\r ]] 00:06:33.691 07:33:59 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:33.951 07:33:59 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:33.951 07:33:59 -- dd/uring.sh@75 -- # gen_conf 00:06:33.951 07:33:59 -- dd/common.sh@31 -- # xtrace_disable 00:06:33.951 07:33:59 -- common/autotest_common.sh@10 -- # set +x 00:06:33.951 [2024-12-02 07:33:59.447593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
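Similarly, the dd_uring_copy steps above reduce to roughly the following. SPDK_DD and the magic.dump paths come from the trace; CONF stands in for the malloc0/uring0 gen_conf JSON; the zram disksize write is the standard sysfs interface and is assumed to be what set_zram_dev does; the magic generator is a stand-in for gen_bytes 1024.

# Hedged sketch of the zram-backed uring copy/verify flow; not the test script itself.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=/tmp/uring_bdev.json          # illustrative: gen_conf JSON (bdev_malloc_create + bdev_uring_create)
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1

zram_dev_id=$(cat /sys/class/zram-control/hot_add)                 # hot-add a zram device (1 in the log)
echo 512M > "/sys/block/zram${zram_dev_id}/disksize"               # assumed target of set_zram_dev 1 512M

magic=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 1024)             # stand-in for gen_bytes 1024
echo "$magic" > "$magic_file0"
"$SPDK_DD" --if=/dev/zero --of="$magic_file0" --oflag=append \
           --bs=536869887 --count=1                                # pad so the dump totals ~512 MiB
"$SPDK_DD" --if="$magic_file0" --ob=uring0 --json "$CONF"          # file -> uring0 (backed by /dev/zram1)
"$SPDK_DD" --ib=uring0 --of="$magic_file1" --json "$CONF"          # uring0 -> file
read -rn1024 verify_magic < "$magic_file1"
[[ $verify_magic == "$magic" ]]                                    # leading magic intact
diff -q "$magic_file0" "$magic_file1"                              # byte-for-byte identical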
00:06:33.951 [2024-12-02 07:33:59.448387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:06:33.951 { 00:06:33.951 "subsystems": [ 00:06:33.951 { 00:06:33.951 "subsystem": "bdev", 00:06:33.951 "config": [ 00:06:33.951 { 00:06:33.951 "params": { 00:06:33.951 "block_size": 512, 00:06:33.951 "num_blocks": 1048576, 00:06:33.951 "name": "malloc0" 00:06:33.951 }, 00:06:33.951 "method": "bdev_malloc_create" 00:06:33.951 }, 00:06:33.951 { 00:06:33.951 "params": { 00:06:33.951 "filename": "/dev/zram1", 00:06:33.951 "name": "uring0" 00:06:33.951 }, 00:06:33.951 "method": "bdev_uring_create" 00:06:33.951 }, 00:06:33.951 { 00:06:33.951 "method": "bdev_wait_for_examine" 00:06:33.951 } 00:06:33.951 ] 00:06:33.951 } 00:06:33.951 ] 00:06:33.951 } 00:06:34.210 [2024-12-02 07:33:59.579146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.210 [2024-12-02 07:33:59.626936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.149  [2024-12-02T07:34:02.152Z] Copying: 187/512 [MB] (187 MBps) [2024-12-02T07:34:02.720Z] Copying: 374/512 [MB] (186 MBps) [2024-12-02T07:34:02.980Z] Copying: 512/512 [MB] (average 187 MBps) 00:06:37.356 00:06:37.356 07:34:02 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:37.356 07:34:02 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:37.356 07:34:02 -- dd/uring.sh@87 -- # : 00:06:37.356 07:34:02 -- dd/uring.sh@87 -- # : 00:06:37.356 07:34:02 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:37.356 07:34:02 -- dd/uring.sh@87 -- # gen_conf 00:06:37.356 07:34:02 -- dd/common.sh@31 -- # xtrace_disable 00:06:37.356 07:34:02 -- common/autotest_common.sh@10 -- # set +x 00:06:37.356 [2024-12-02 07:34:02.779971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:37.356 [2024-12-02 07:34:02.780067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59161 ] 00:06:37.356 { 00:06:37.356 "subsystems": [ 00:06:37.356 { 00:06:37.356 "subsystem": "bdev", 00:06:37.356 "config": [ 00:06:37.356 { 00:06:37.356 "params": { 00:06:37.356 "block_size": 512, 00:06:37.356 "num_blocks": 1048576, 00:06:37.356 "name": "malloc0" 00:06:37.356 }, 00:06:37.356 "method": "bdev_malloc_create" 00:06:37.356 }, 00:06:37.356 { 00:06:37.356 "params": { 00:06:37.356 "filename": "/dev/zram1", 00:06:37.356 "name": "uring0" 00:06:37.356 }, 00:06:37.356 "method": "bdev_uring_create" 00:06:37.356 }, 00:06:37.356 { 00:06:37.356 "params": { 00:06:37.356 "name": "uring0" 00:06:37.356 }, 00:06:37.356 "method": "bdev_uring_delete" 00:06:37.356 }, 00:06:37.356 { 00:06:37.356 "method": "bdev_wait_for_examine" 00:06:37.356 } 00:06:37.356 ] 00:06:37.356 } 00:06:37.356 ] 00:06:37.356 } 00:06:37.357 [2024-12-02 07:34:02.917542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.357 [2024-12-02 07:34:02.968863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.617  [2024-12-02T07:34:03.501Z] Copying: 0/0 [B] (average 0 Bps) 00:06:37.877 00:06:37.877 07:34:03 -- dd/uring.sh@94 -- # : 00:06:37.877 07:34:03 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:37.877 07:34:03 -- dd/uring.sh@94 -- # gen_conf 00:06:37.877 07:34:03 -- common/autotest_common.sh@650 -- # local es=0 00:06:37.877 07:34:03 -- dd/common.sh@31 -- # xtrace_disable 00:06:37.877 07:34:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:37.877 07:34:03 -- common/autotest_common.sh@10 -- # set +x 00:06:37.877 07:34:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.877 07:34:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.877 07:34:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.877 07:34:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.877 07:34:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.877 07:34:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.877 07:34:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.877 07:34:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.877 07:34:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:37.877 [2024-12-02 07:34:03.419801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:37.877 [2024-12-02 07:34:03.419890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:06:37.877 { 00:06:37.877 "subsystems": [ 00:06:37.877 { 00:06:37.877 "subsystem": "bdev", 00:06:37.877 "config": [ 00:06:37.877 { 00:06:37.877 "params": { 00:06:37.877 "block_size": 512, 00:06:37.877 "num_blocks": 1048576, 00:06:37.877 "name": "malloc0" 00:06:37.877 }, 00:06:37.877 "method": "bdev_malloc_create" 00:06:37.877 }, 00:06:37.877 { 00:06:37.877 "params": { 00:06:37.877 "filename": "/dev/zram1", 00:06:37.877 "name": "uring0" 00:06:37.877 }, 00:06:37.877 "method": "bdev_uring_create" 00:06:37.877 }, 00:06:37.877 { 00:06:37.877 "params": { 00:06:37.877 "name": "uring0" 00:06:37.877 }, 00:06:37.877 "method": "bdev_uring_delete" 00:06:37.877 }, 00:06:37.877 { 00:06:37.877 "method": "bdev_wait_for_examine" 00:06:37.877 } 00:06:37.877 ] 00:06:37.877 } 00:06:37.877 ] 00:06:37.877 } 00:06:38.137 [2024-12-02 07:34:03.554931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.137 [2024-12-02 07:34:03.608755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.137 [2024-12-02 07:34:03.749354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:38.137 [2024-12-02 07:34:03.749403] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:38.137 [2024-12-02 07:34:03.749428] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:06:38.137 [2024-12-02 07:34:03.749436] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.396 [2024-12-02 07:34:03.900283] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:38.396 07:34:03 -- common/autotest_common.sh@653 -- # es=237 00:06:38.396 07:34:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.396 07:34:03 -- common/autotest_common.sh@662 -- # es=109 00:06:38.396 07:34:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:38.396 07:34:03 -- common/autotest_common.sh@670 -- # es=1 00:06:38.396 07:34:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.396 07:34:03 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:38.396 07:34:03 -- dd/common.sh@172 -- # local id=1 00:06:38.396 07:34:03 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:06:38.396 07:34:03 -- dd/common.sh@176 -- # echo 1 00:06:38.396 07:34:04 -- dd/common.sh@177 -- # echo 1 00:06:38.655 07:34:04 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:38.655 00:06:38.655 real 0m13.262s 00:06:38.655 user 0m7.519s 00:06:38.655 sys 0m5.202s 00:06:38.656 07:34:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.656 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:38.656 ************************************ 00:06:38.656 END TEST dd_uring_copy 00:06:38.656 ************************************ 00:06:38.656 00:06:38.656 real 0m13.504s 00:06:38.656 user 0m7.654s 00:06:38.656 sys 0m5.313s 00:06:38.656 07:34:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.656 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:38.656 ************************************ 00:06:38.656 END TEST spdk_dd_uring 00:06:38.656 ************************************ 00:06:38.916 07:34:04 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:38.916 07:34:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.916 07:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.916 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:38.916 ************************************ 00:06:38.916 START TEST spdk_dd_sparse 00:06:38.916 ************************************ 00:06:38.916 07:34:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:38.916 * Looking for test storage... 00:06:38.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:38.916 07:34:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:38.916 07:34:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:38.916 07:34:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:38.916 07:34:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:38.916 07:34:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:38.916 07:34:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:38.916 07:34:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:38.916 07:34:04 -- scripts/common.sh@335 -- # IFS=.-: 00:06:38.916 07:34:04 -- scripts/common.sh@335 -- # read -ra ver1 00:06:38.916 07:34:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.916 07:34:04 -- scripts/common.sh@336 -- # read -ra ver2 00:06:38.916 07:34:04 -- scripts/common.sh@337 -- # local 'op=<' 00:06:38.916 07:34:04 -- scripts/common.sh@339 -- # ver1_l=2 00:06:38.916 07:34:04 -- scripts/common.sh@340 -- # ver2_l=1 00:06:38.916 07:34:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:38.916 07:34:04 -- scripts/common.sh@343 -- # case "$op" in 00:06:38.916 07:34:04 -- scripts/common.sh@344 -- # : 1 00:06:38.916 07:34:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:38.916 07:34:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.916 07:34:04 -- scripts/common.sh@364 -- # decimal 1 00:06:38.916 07:34:04 -- scripts/common.sh@352 -- # local d=1 00:06:38.916 07:34:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.916 07:34:04 -- scripts/common.sh@354 -- # echo 1 00:06:38.916 07:34:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:38.916 07:34:04 -- scripts/common.sh@365 -- # decimal 2 00:06:38.916 07:34:04 -- scripts/common.sh@352 -- # local d=2 00:06:38.916 07:34:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.916 07:34:04 -- scripts/common.sh@354 -- # echo 2 00:06:38.916 07:34:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:38.916 07:34:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:38.916 07:34:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:38.916 07:34:04 -- scripts/common.sh@367 -- # return 0 00:06:38.916 07:34:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.916 07:34:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:38.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.916 --rc genhtml_branch_coverage=1 00:06:38.916 --rc genhtml_function_coverage=1 00:06:38.916 --rc genhtml_legend=1 00:06:38.916 --rc geninfo_all_blocks=1 00:06:38.916 --rc geninfo_unexecuted_blocks=1 00:06:38.916 00:06:38.916 ' 00:06:38.916 07:34:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:38.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.916 --rc genhtml_branch_coverage=1 00:06:38.916 --rc genhtml_function_coverage=1 00:06:38.916 --rc genhtml_legend=1 00:06:38.916 --rc geninfo_all_blocks=1 00:06:38.916 --rc geninfo_unexecuted_blocks=1 00:06:38.916 00:06:38.916 ' 00:06:38.916 07:34:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:38.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.916 --rc genhtml_branch_coverage=1 00:06:38.916 --rc genhtml_function_coverage=1 00:06:38.916 --rc genhtml_legend=1 00:06:38.916 --rc geninfo_all_blocks=1 00:06:38.916 --rc geninfo_unexecuted_blocks=1 00:06:38.916 00:06:38.916 ' 00:06:38.916 07:34:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:38.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.916 --rc genhtml_branch_coverage=1 00:06:38.916 --rc genhtml_function_coverage=1 00:06:38.916 --rc genhtml_legend=1 00:06:38.916 --rc geninfo_all_blocks=1 00:06:38.916 --rc geninfo_unexecuted_blocks=1 00:06:38.916 00:06:38.916 ' 00:06:38.916 07:34:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.916 07:34:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.916 07:34:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.916 07:34:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.916 07:34:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.916 07:34:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.917 07:34:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.917 07:34:04 -- paths/export.sh@5 -- # export PATH 00:06:38.917 07:34:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.917 07:34:04 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:38.917 07:34:04 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:38.917 07:34:04 -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:38.917 07:34:04 -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:38.917 07:34:04 -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:38.917 07:34:04 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:38.917 07:34:04 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:38.917 07:34:04 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:38.917 07:34:04 -- dd/sparse.sh@118 -- # prepare 00:06:38.917 07:34:04 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:38.917 07:34:04 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:38.917 1+0 records in 00:06:38.917 1+0 records out 00:06:38.917 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00515643 s, 813 MB/s 00:06:38.917 07:34:04 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:38.917 1+0 records in 00:06:38.917 1+0 records out 00:06:38.917 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00555966 s, 754 MB/s 00:06:38.917 07:34:04 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:38.917 1+0 records in 00:06:38.917 1+0 records out 00:06:38.917 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00615486 s, 681 MB/s 00:06:38.917 07:34:04 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:38.917 07:34:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.917 07:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.917 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:39.177 ************************************ 00:06:39.177 START TEST dd_sparse_file_to_file 00:06:39.177 
************************************ 00:06:39.177 07:34:04 -- common/autotest_common.sh@1114 -- # file_to_file 00:06:39.177 07:34:04 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:39.177 07:34:04 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:39.177 07:34:04 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:39.177 07:34:04 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:39.177 07:34:04 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:39.177 07:34:04 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:39.177 07:34:04 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:39.177 07:34:04 -- dd/sparse.sh@41 -- # gen_conf 00:06:39.177 07:34:04 -- dd/common.sh@31 -- # xtrace_disable 00:06:39.177 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:39.177 [2024-12-02 07:34:04.596061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.177 [2024-12-02 07:34:04.596159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:06:39.177 { 00:06:39.177 "subsystems": [ 00:06:39.177 { 00:06:39.177 "subsystem": "bdev", 00:06:39.177 "config": [ 00:06:39.177 { 00:06:39.177 "params": { 00:06:39.177 "block_size": 4096, 00:06:39.177 "filename": "dd_sparse_aio_disk", 00:06:39.177 "name": "dd_aio" 00:06:39.177 }, 00:06:39.177 "method": "bdev_aio_create" 00:06:39.177 }, 00:06:39.177 { 00:06:39.177 "params": { 00:06:39.177 "lvs_name": "dd_lvstore", 00:06:39.177 "bdev_name": "dd_aio" 00:06:39.177 }, 00:06:39.177 "method": "bdev_lvol_create_lvstore" 00:06:39.177 }, 00:06:39.177 { 00:06:39.177 "method": "bdev_wait_for_examine" 00:06:39.177 } 00:06:39.177 ] 00:06:39.177 } 00:06:39.177 ] 00:06:39.177 } 00:06:39.177 [2024-12-02 07:34:04.731450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.177 [2024-12-02 07:34:04.780464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.437  [2024-12-02T07:34:05.319Z] Copying: 12/36 [MB] (average 1714 MBps) 00:06:39.695 00:06:39.695 07:34:05 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:39.695 07:34:05 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:39.695 07:34:05 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:39.695 07:34:05 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:39.695 07:34:05 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:39.695 07:34:05 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:39.695 07:34:05 -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:39.695 07:34:05 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:39.695 07:34:05 -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:39.695 07:34:05 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:39.695 00:06:39.695 real 0m0.549s 00:06:39.695 user 0m0.331s 00:06:39.695 sys 0m0.132s 00:06:39.695 07:34:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.695 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:39.695 ************************************ 00:06:39.695 END TEST dd_sparse_file_to_file 00:06:39.695 ************************************ 00:06:39.695 07:34:05 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:06:39.695 07:34:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.695 07:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.695 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:39.695 ************************************ 00:06:39.695 START TEST dd_sparse_file_to_bdev 00:06:39.695 ************************************ 00:06:39.695 07:34:05 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:06:39.695 07:34:05 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:39.695 07:34:05 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:39.695 07:34:05 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:06:39.695 07:34:05 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:39.695 07:34:05 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:39.695 07:34:05 -- dd/sparse.sh@73 -- # gen_conf 00:06:39.695 07:34:05 -- dd/common.sh@31 -- # xtrace_disable 00:06:39.695 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:39.695 [2024-12-02 07:34:05.194253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.695 [2024-12-02 07:34:05.195020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ] 00:06:39.695 { 00:06:39.695 "subsystems": [ 00:06:39.695 { 00:06:39.695 "subsystem": "bdev", 00:06:39.695 "config": [ 00:06:39.695 { 00:06:39.695 "params": { 00:06:39.695 "block_size": 4096, 00:06:39.695 "filename": "dd_sparse_aio_disk", 00:06:39.695 "name": "dd_aio" 00:06:39.695 }, 00:06:39.695 "method": "bdev_aio_create" 00:06:39.695 }, 00:06:39.695 { 00:06:39.695 "params": { 00:06:39.695 "lvs_name": "dd_lvstore", 00:06:39.695 "lvol_name": "dd_lvol", 00:06:39.695 "size": 37748736, 00:06:39.695 "thin_provision": true 00:06:39.695 }, 00:06:39.695 "method": "bdev_lvol_create" 00:06:39.695 }, 00:06:39.695 { 00:06:39.695 "method": "bdev_wait_for_examine" 00:06:39.695 } 00:06:39.695 ] 00:06:39.695 } 00:06:39.695 ] 00:06:39.695 } 00:06:39.953 [2024-12-02 07:34:05.329365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.953 [2024-12-02 07:34:05.385816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.953 [2024-12-02 07:34:05.441411] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:06:39.953  [2024-12-02T07:34:05.577Z] Copying: 12/36 [MB] (average 480 MBps)[2024-12-02 07:34:05.482082] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:06:40.212 00:06:40.212 00:06:40.212 00:06:40.212 real 0m0.536s 00:06:40.212 user 0m0.332s 00:06:40.212 sys 0m0.117s 00:06:40.212 07:34:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.212 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.212 ************************************ 00:06:40.212 END TEST dd_sparse_file_to_bdev 00:06:40.212 ************************************ 
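The sparse cases above all hinge on the same mechanics: prepare() builds a 36 MiB file with three 4 MiB data extents separated by holes, spdk_dd copies it with --sparse while the JSON fed on fd 62 stands up an AIO bdev and an lvstore on top of it, and the stat %s/%b comparison verifies that the apparent size (37748736 bytes) is preserved while only 24576 blocks stay allocated. A stand-alone sketch of the file-to-file leg, using the paths, sizes, and JSON from this log (the heredoc on fd 62 is an illustrative stand-in for the harness's gen_conf plumbing, not the harness itself):

#!/usr/bin/env bash
# Sketch of the dd_sparse_file_to_file flow recorded above; run_test/NOT wrappers omitted.
set -euo pipefail
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# 1) Backing file for the AIO bdev plus a 36 MiB input with data at 0, 16 and 32 MiB.
truncate --size 104857600 dd_sparse_aio_disk
for seek in 0 4 8; do
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek="$seek"
done

# 2) File-to-file copy with hole skipping; the bdev config is passed as JSON on fd 62.
"$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 62<<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
  {"method":"bdev_lvol_create_lvstore","params":{"bdev_name":"dd_aio","lvs_name":"dd_lvstore"}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON

# 3) The pass criteria used above: identical apparent size and identical allocated blocks.
[[ $(stat --printf=%s file_zero1) -eq $(stat --printf=%s file_zero2) ]]   # 37748736 bytes each
[[ $(stat --printf=%b file_zero1) -eq $(stat --printf=%b file_zero2) ]]   # 24576 blocks: holes kept

The file-to-bdev leg that just finished differs only in swapping --of=file_zero2 for --ob=dd_lvstore/dd_lvol and adding a thin-provisioned bdev_lvol_create entry (size 37748736) to the JSON, which is why the deprecation warning about rpc_bdev_lvol_create/resize req.size shows up in that test alone.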
00:06:40.212 07:34:05 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:06:40.212 07:34:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.212 07:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.212 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.212 ************************************ 00:06:40.212 START TEST dd_sparse_bdev_to_file 00:06:40.212 ************************************ 00:06:40.212 07:34:05 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:06:40.212 07:34:05 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:40.212 07:34:05 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:40.212 07:34:05 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:40.212 07:34:05 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:40.212 07:34:05 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:40.212 07:34:05 -- dd/sparse.sh@91 -- # gen_conf 00:06:40.212 07:34:05 -- dd/common.sh@31 -- # xtrace_disable 00:06:40.212 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.212 [2024-12-02 07:34:05.783160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.212 [2024-12-02 07:34:05.783254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:06:40.212 { 00:06:40.212 "subsystems": [ 00:06:40.212 { 00:06:40.212 "subsystem": "bdev", 00:06:40.212 "config": [ 00:06:40.212 { 00:06:40.212 "params": { 00:06:40.212 "block_size": 4096, 00:06:40.212 "filename": "dd_sparse_aio_disk", 00:06:40.212 "name": "dd_aio" 00:06:40.212 }, 00:06:40.212 "method": "bdev_aio_create" 00:06:40.212 }, 00:06:40.212 { 00:06:40.212 "method": "bdev_wait_for_examine" 00:06:40.212 } 00:06:40.212 ] 00:06:40.212 } 00:06:40.212 ] 00:06:40.212 } 00:06:40.471 [2024-12-02 07:34:05.918858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.471 [2024-12-02 07:34:05.971244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.471  [2024-12-02T07:34:06.355Z] Copying: 12/36 [MB] (average 1333 MBps) 00:06:40.731 00:06:40.731 07:34:06 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:40.731 07:34:06 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:40.731 07:34:06 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:40.731 07:34:06 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:40.731 07:34:06 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:40.731 07:34:06 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:40.731 07:34:06 -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:40.731 07:34:06 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:40.731 07:34:06 -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:40.731 07:34:06 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:40.731 00:06:40.731 real 0m0.544s 00:06:40.731 user 0m0.327s 00:06:40.731 sys 0m0.131s 00:06:40.731 07:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.731 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:40.731 ************************************ 00:06:40.731 END TEST dd_sparse_bdev_to_file 00:06:40.731 ************************************ 00:06:40.731 07:34:06 -- 
dd/sparse.sh@1 -- # cleanup 00:06:40.731 07:34:06 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:40.731 07:34:06 -- dd/sparse.sh@12 -- # rm file_zero1 00:06:40.731 07:34:06 -- dd/sparse.sh@13 -- # rm file_zero2 00:06:40.731 07:34:06 -- dd/sparse.sh@14 -- # rm file_zero3 00:06:40.731 00:06:40.731 real 0m2.012s 00:06:40.731 user 0m1.166s 00:06:40.731 sys 0m0.579s 00:06:40.731 07:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.731 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:40.731 ************************************ 00:06:40.731 END TEST spdk_dd_sparse 00:06:40.731 ************************************ 00:06:40.991 07:34:06 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:40.991 07:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.991 07:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.991 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:40.991 ************************************ 00:06:40.991 START TEST spdk_dd_negative 00:06:40.991 ************************************ 00:06:40.991 07:34:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:40.991 * Looking for test storage... 00:06:40.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:40.991 07:34:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.991 07:34:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.991 07:34:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.991 07:34:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.991 07:34:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.991 07:34:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.991 07:34:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.991 07:34:06 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.991 07:34:06 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.991 07:34:06 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.991 07:34:06 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.991 07:34:06 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.991 07:34:06 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.991 07:34:06 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.991 07:34:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.991 07:34:06 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.991 07:34:06 -- scripts/common.sh@344 -- # : 1 00:06:40.991 07:34:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.991 07:34:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.991 07:34:06 -- scripts/common.sh@364 -- # decimal 1 00:06:40.991 07:34:06 -- scripts/common.sh@352 -- # local d=1 00:06:40.991 07:34:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.991 07:34:06 -- scripts/common.sh@354 -- # echo 1 00:06:40.991 07:34:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.991 07:34:06 -- scripts/common.sh@365 -- # decimal 2 00:06:40.991 07:34:06 -- scripts/common.sh@352 -- # local d=2 00:06:40.991 07:34:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.991 07:34:06 -- scripts/common.sh@354 -- # echo 2 00:06:40.991 07:34:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.991 07:34:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.991 07:34:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.991 07:34:06 -- scripts/common.sh@367 -- # return 0 00:06:40.991 07:34:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.991 07:34:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 07:34:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 07:34:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.991 --rc genhtml_legend=1 00:06:40.991 --rc geninfo_all_blocks=1 00:06:40.991 --rc geninfo_unexecuted_blocks=1 00:06:40.991 00:06:40.991 ' 00:06:40.991 07:34:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.991 --rc genhtml_branch_coverage=1 00:06:40.991 --rc genhtml_function_coverage=1 00:06:40.992 --rc genhtml_legend=1 00:06:40.992 --rc geninfo_all_blocks=1 00:06:40.992 --rc geninfo_unexecuted_blocks=1 00:06:40.992 00:06:40.992 ' 00:06:40.992 07:34:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:40.992 07:34:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.992 07:34:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.992 07:34:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.992 07:34:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.992 07:34:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.992 07:34:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.992 07:34:06 -- paths/export.sh@5 -- # export PATH 00:06:40.992 07:34:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.992 07:34:06 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:40.992 07:34:06 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.992 07:34:06 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:40.992 07:34:06 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.992 07:34:06 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:06:40.992 07:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.992 07:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.992 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:40.992 ************************************ 00:06:40.992 START TEST dd_invalid_arguments 00:06:40.992 ************************************ 00:06:40.992 07:34:06 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:06:40.992 07:34:06 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:40.992 07:34:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.992 07:34:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:40.992 07:34:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.992 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.992 07:34:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.992 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.992 07:34:06 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.992 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.992 07:34:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.992 07:34:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:40.992 07:34:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:41.251 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:41.251 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:41.251 options: 00:06:41.251 -c, --config JSON config file (default none) 00:06:41.251 --json JSON config file (default none) 00:06:41.251 --json-ignore-init-errors 00:06:41.251 don't exit on invalid config entry 00:06:41.251 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:41.251 -g, --single-file-segments 00:06:41.251 force creating just one hugetlbfs file 00:06:41.251 -h, --help show this usage 00:06:41.251 -i, --shm-id shared memory ID (optional) 00:06:41.251 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:41.251 --lcores lcore to CPU mapping list. The list is in the format: 00:06:41.251 [<,lcores[@CPUs]>...] 00:06:41.252 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:41.252 Within the group, '-' is used for range separator, 00:06:41.252 ',' is used for single number separator. 00:06:41.252 '( )' can be omitted for single element group, 00:06:41.252 '@' can be omitted if cpus and lcores have the same value 00:06:41.252 -n, --mem-channels channel number of memory channels used for DPDK 00:06:41.252 -p, --main-core main (primary) core for DPDK 00:06:41.252 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:41.252 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:41.252 --disable-cpumask-locks Disable CPU core lock files. 00:06:41.252 --silence-noticelog disable notice level logging to stderr 00:06:41.252 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:41.252 -u, --no-pci disable PCI access 00:06:41.252 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:41.252 --max-delay maximum reactor delay (in microseconds) 00:06:41.252 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:41.252 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:41.252 -R, --huge-unlink unlink huge files after initialization 00:06:41.252 -v, --version print SPDK version 00:06:41.252 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:41.252 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:41.252 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:41.252 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:41.252 Tracepoints vary in size and can use more than one trace entry. 
00:06:41.252 --rpcs-allowed comma-separated list of permitted RPCS 00:06:41.252 --env-context Opaque context for use of the env implementation 00:06:41.252 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:41.252 --no-huge run without using hugepages 00:06:41.252 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:41.252 -e, --tpoint-group [:] 00:06:41.252 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:06:41.252 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be[2024-12-02 07:34:06.628918] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:06:41.252 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:41.252 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:41.252 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:41.252 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:41.252 [--------- DD Options ---------] 00:06:41.252 --if Input file. Must specify either --if or --ib. 00:06:41.252 --ib Input bdev. Must specifier either --if or --ib 00:06:41.252 --of Output file. Must specify either --of or --ob. 00:06:41.252 --ob Output bdev. Must specify either --of or --ob. 00:06:41.252 --iflag Input file flags. 00:06:41.252 --oflag Output file flags. 00:06:41.252 --bs I/O unit size (default: 4096) 00:06:41.252 --qd Queue depth (default: 2) 00:06:41.252 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:41.252 --skip Skip this many I/O units at start of input. (default: 0) 00:06:41.252 --seek Skip this many I/O units at start of output. (default: 0) 00:06:41.252 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:06:41.252 --sparse Enable hole skipping in input target 00:06:41.252 Available iflag and oflag values: 00:06:41.252 append - append mode 00:06:41.252 direct - use direct I/O for data 00:06:41.252 directory - fail unless a directory 00:06:41.252 dsync - use synchronized I/O for data 00:06:41.252 noatime - do not update access time 00:06:41.252 noctty - do not assign controlling terminal from file 00:06:41.252 nofollow - do not follow symlinks 00:06:41.252 nonblock - use non-blocking I/O 00:06:41.252 sync - use synchronized I/O for data and metadata 00:06:41.252 ************************************ 00:06:41.252 END TEST dd_invalid_arguments 00:06:41.252 ************************************ 00:06:41.252 07:34:06 -- common/autotest_common.sh@653 -- # es=2 00:06:41.252 07:34:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.252 07:34:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.252 07:34:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.252 00:06:41.252 real 0m0.069s 00:06:41.252 user 0m0.045s 00:06:41.252 sys 0m0.022s 00:06:41.252 07:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.252 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.252 07:34:06 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:06:41.252 07:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.252 07:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.252 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.252 ************************************ 00:06:41.252 START TEST dd_double_input 00:06:41.252 ************************************ 00:06:41.252 07:34:06 -- common/autotest_common.sh@1114 -- # double_input 00:06:41.252 07:34:06 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:41.252 07:34:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.252 07:34:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:41.252 07:34:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.252 07:34:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.252 07:34:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.252 07:34:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:41.252 [2024-12-02 07:34:06.748201] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
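The dd_double_input failure just printed, like every case in this negative block, follows one pattern: invoke spdk_dd with a deliberately invalid flag combination, require a non-zero exit status, and match the *ERROR* diagnostic that names the validation which fired. A minimal stand-alone sketch of that pattern, using the binary and dump-file paths from this log (expect_dd_error is a hypothetical helper written for illustration; it is not the NOT wrapper from autotest_common.sh):

#!/usr/bin/env bash
# Each check fails the script if spdk_dd exits 0 or prints a different diagnostic.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

expect_dd_error() {
    local msg=$1; shift          # expected diagnostic, then the spdk_dd flags to try
    local out
    if out=$("$SPDK_DD" "$@" 2>&1); then
        echo "FAIL: spdk_dd $* unexpectedly succeeded" >&2
        return 1
    fi
    grep -qF -- "$msg" <<<"$out" || { echo "FAIL: diagnostic not found: $msg" >&2; return 1; }
}

# The two cases that have run so far in this block:
expect_dd_error "unrecognized option '--ii='"                        --ii= --ob=
expect_dd_error "You may specify either --if or --ib, but not both." --if="$DUMP0" --ib= --ob=

The exit status captured next (es=22) is what the harness compares against; the later dd_double_output, dd_no_input, dd_no_output, dd_wrong_blocksize, dd_invalid_count, dd_invalid_oflag and dd_invalid_iflag cases only change the flag set and the expected message.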
00:06:41.252 07:34:06 -- common/autotest_common.sh@653 -- # es=22 00:06:41.252 07:34:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.252 07:34:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.252 07:34:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.252 00:06:41.252 real 0m0.073s 00:06:41.252 user 0m0.047s 00:06:41.252 sys 0m0.024s 00:06:41.252 07:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.252 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.252 ************************************ 00:06:41.252 END TEST dd_double_input 00:06:41.252 ************************************ 00:06:41.252 07:34:06 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:06:41.252 07:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.252 07:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.252 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.252 ************************************ 00:06:41.252 START TEST dd_double_output 00:06:41.252 ************************************ 00:06:41.252 07:34:06 -- common/autotest_common.sh@1114 -- # double_output 00:06:41.252 07:34:06 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:41.252 07:34:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.252 07:34:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:41.252 07:34:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.252 07:34:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.252 07:34:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.252 07:34:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.252 07:34:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:41.252 [2024-12-02 07:34:06.870666] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:06:41.512 07:34:06 -- common/autotest_common.sh@653 -- # es=22 00:06:41.512 07:34:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.512 07:34:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.512 07:34:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.512 00:06:41.512 real 0m0.070s 00:06:41.512 user 0m0.050s 00:06:41.512 sys 0m0.019s 00:06:41.512 07:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.512 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.512 ************************************ 00:06:41.512 END TEST dd_double_output 00:06:41.512 ************************************ 00:06:41.512 07:34:06 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:06:41.512 07:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.512 07:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.512 07:34:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.512 ************************************ 00:06:41.512 START TEST dd_no_input 00:06:41.512 ************************************ 00:06:41.512 07:34:06 -- common/autotest_common.sh@1114 -- # no_input 00:06:41.512 07:34:06 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:41.512 07:34:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.512 07:34:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:41.512 07:34:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.512 07:34:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.512 07:34:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.512 07:34:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.512 07:34:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:41.512 [2024-12-02 07:34:06.991448] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:06:41.512 07:34:07 -- common/autotest_common.sh@653 -- # es=22 00:06:41.512 07:34:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.512 07:34:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.512 07:34:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.512 00:06:41.512 real 0m0.070s 00:06:41.512 user 0m0.046s 00:06:41.512 sys 0m0.023s 00:06:41.512 ************************************ 00:06:41.512 END TEST dd_no_input 00:06:41.512 ************************************ 00:06:41.512 07:34:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.512 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:41.512 07:34:07 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:06:41.512 07:34:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.512 07:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.512 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:41.512 ************************************ 
00:06:41.512 START TEST dd_no_output 00:06:41.512 ************************************ 00:06:41.512 07:34:07 -- common/autotest_common.sh@1114 -- # no_output 00:06:41.512 07:34:07 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:41.512 07:34:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.512 07:34:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:41.512 07:34:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.512 07:34:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.512 07:34:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.512 07:34:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.512 07:34:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.512 07:34:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:41.512 [2024-12-02 07:34:07.118449] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:06:41.772 07:34:07 -- common/autotest_common.sh@653 -- # es=22 00:06:41.772 07:34:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.772 07:34:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.772 07:34:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.772 00:06:41.772 real 0m0.074s 00:06:41.772 user 0m0.040s 00:06:41.772 sys 0m0.032s 00:06:41.772 07:34:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.772 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:41.772 ************************************ 00:06:41.772 END TEST dd_no_output 00:06:41.772 ************************************ 00:06:41.772 07:34:07 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:41.772 07:34:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.772 07:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.772 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:41.772 ************************************ 00:06:41.772 START TEST dd_wrong_blocksize 00:06:41.772 ************************************ 00:06:41.772 07:34:07 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:06:41.772 07:34:07 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:41.772 07:34:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.772 07:34:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:41.772 07:34:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.772 07:34:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.772 07:34:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.772 07:34:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:41.772 [2024-12-02 07:34:07.237993] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:06:41.772 07:34:07 -- common/autotest_common.sh@653 -- # es=22 00:06:41.772 07:34:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.772 07:34:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.772 07:34:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.772 00:06:41.772 real 0m0.069s 00:06:41.772 user 0m0.043s 00:06:41.772 sys 0m0.024s 00:06:41.772 07:34:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.772 ************************************ 00:06:41.772 END TEST dd_wrong_blocksize 00:06:41.772 ************************************ 00:06:41.772 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:41.772 07:34:07 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:41.772 07:34:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.772 07:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.772 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:41.772 ************************************ 00:06:41.772 START TEST dd_smaller_blocksize 00:06:41.772 ************************************ 00:06:41.772 07:34:07 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:06:41.772 07:34:07 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:41.772 07:34:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:41.772 07:34:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:41.772 07:34:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.772 07:34:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.772 07:34:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.772 07:34:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:06:41.772 07:34:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:41.772 [2024-12-02 07:34:07.360676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.772 [2024-12-02 07:34:07.360764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:06:42.032 [2024-12-02 07:34:07.499851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.032 [2024-12-02 07:34:07.571461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.292 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:42.292 [2024-12-02 07:34:07.885019] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:42.292 [2024-12-02 07:34:07.885061] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.551 [2024-12-02 07:34:07.944844] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:42.551 07:34:08 -- common/autotest_common.sh@653 -- # es=244 00:06:42.551 07:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.551 07:34:08 -- common/autotest_common.sh@662 -- # es=116 00:06:42.551 07:34:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:42.551 07:34:08 -- common/autotest_common.sh@670 -- # es=1 00:06:42.551 07:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.551 00:06:42.551 real 0m0.729s 00:06:42.551 user 0m0.322s 00:06:42.551 sys 0m0.301s 00:06:42.551 07:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.551 ************************************ 00:06:42.551 END TEST dd_smaller_blocksize 00:06:42.551 ************************************ 00:06:42.551 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.551 07:34:08 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:06:42.551 07:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.551 07:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.551 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.551 ************************************ 00:06:42.551 START TEST dd_invalid_count 00:06:42.551 ************************************ 00:06:42.551 07:34:08 -- common/autotest_common.sh@1114 -- # invalid_count 00:06:42.551 07:34:08 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:42.551 07:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.551 07:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:42.551 07:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.551 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.551 07:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.551 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.551 07:34:08 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.551 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.551 07:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.551 07:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.551 07:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:42.551 [2024-12-02 07:34:08.134704] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:06:42.551 07:34:08 -- common/autotest_common.sh@653 -- # es=22 00:06:42.551 07:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.551 07:34:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.551 07:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.551 00:06:42.551 real 0m0.058s 00:06:42.551 user 0m0.037s 00:06:42.551 sys 0m0.021s 00:06:42.551 07:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.551 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.551 ************************************ 00:06:42.551 END TEST dd_invalid_count 00:06:42.551 ************************************ 00:06:42.811 07:34:08 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:06:42.811 07:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.811 07:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.811 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.811 ************************************ 00:06:42.811 START TEST dd_invalid_oflag 00:06:42.811 ************************************ 00:06:42.811 07:34:08 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:06:42.811 07:34:08 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:42.811 07:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.811 07:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:42.811 07:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.811 07:34:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.811 07:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.811 07:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:42.811 [2024-12-02 07:34:08.246533] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:06:42.811 07:34:08 -- common/autotest_common.sh@653 -- # es=22 00:06:42.811 07:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.811 07:34:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.811 
07:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.811 00:06:42.811 real 0m0.064s 00:06:42.811 user 0m0.040s 00:06:42.811 sys 0m0.024s 00:06:42.811 ************************************ 00:06:42.811 END TEST dd_invalid_oflag 00:06:42.811 ************************************ 00:06:42.811 07:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.811 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.811 07:34:08 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:06:42.811 07:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.811 07:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.811 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.811 ************************************ 00:06:42.811 START TEST dd_invalid_iflag 00:06:42.811 ************************************ 00:06:42.811 07:34:08 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:06:42.811 07:34:08 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:42.811 07:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.811 07:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:42.811 07:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.811 07:34:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.811 07:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.811 07:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.811 07:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:42.811 [2024-12-02 07:34:08.362748] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:06:42.811 07:34:08 -- common/autotest_common.sh@653 -- # es=22 00:06:42.811 07:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.811 07:34:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.811 07:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.811 00:06:42.811 real 0m0.071s 00:06:42.811 user 0m0.047s 00:06:42.811 sys 0m0.024s 00:06:42.812 ************************************ 00:06:42.812 END TEST dd_invalid_iflag 00:06:42.812 ************************************ 00:06:42.812 07:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.812 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.812 07:34:08 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:06:42.812 07:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.812 07:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.812 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:42.812 ************************************ 00:06:42.812 START TEST dd_unknown_flag 00:06:42.812 ************************************ 00:06:42.812 07:34:08 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:06:42.812 07:34:08 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:42.812 07:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.812 07:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:42.812 07:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.071 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.071 07:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.071 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.071 07:34:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.071 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.071 07:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.071 07:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.071 07:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:43.071 [2024-12-02 07:34:08.475269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.071 [2024-12-02 07:34:08.475360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59669 ] 00:06:43.071 [2024-12-02 07:34:08.604285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.071 [2024-12-02 07:34:08.658523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.331 [2024-12-02 07:34:08.705391] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:06:43.331 [2024-12-02 07:34:08.705474] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:06:43.331 [2024-12-02 07:34:08.705485] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:06:43.331 [2024-12-02 07:34:08.705495] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.331 [2024-12-02 07:34:08.761476] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:43.331 07:34:08 -- common/autotest_common.sh@653 -- # es=236 00:06:43.331 07:34:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.331 07:34:08 -- common/autotest_common.sh@662 -- # es=108 00:06:43.331 07:34:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:43.331 07:34:08 -- common/autotest_common.sh@670 -- # es=1 00:06:43.331 07:34:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.331 00:06:43.331 real 0m0.418s 00:06:43.331 user 0m0.218s 00:06:43.331 sys 0m0.097s 00:06:43.331 07:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.331 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:43.331 ************************************ 00:06:43.331 END 
TEST dd_unknown_flag 00:06:43.331 ************************************ 00:06:43.331 07:34:08 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:06:43.331 07:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.331 07:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.331 07:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:43.331 ************************************ 00:06:43.331 START TEST dd_invalid_json 00:06:43.331 ************************************ 00:06:43.331 07:34:08 -- common/autotest_common.sh@1114 -- # invalid_json 00:06:43.331 07:34:08 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:43.331 07:34:08 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.331 07:34:08 -- dd/negative_dd.sh@95 -- # : 00:06:43.331 07:34:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:43.331 07:34:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.331 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.331 07:34:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.331 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.331 07:34:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.331 07:34:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.331 07:34:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.331 07:34:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.331 07:34:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:43.590 [2024-12-02 07:34:08.956344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.590 [2024-12-02 07:34:08.956434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:06:43.590 [2024-12-02 07:34:09.089316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.590 [2024-12-02 07:34:09.138548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.590 [2024-12-02 07:34:09.138680] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:06:43.590 [2024-12-02 07:34:09.138699] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.590 [2024-12-02 07:34:09.138753] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:43.850 07:34:09 -- common/autotest_common.sh@653 -- # es=234 00:06:43.850 07:34:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.850 07:34:09 -- common/autotest_common.sh@662 -- # es=106 00:06:43.850 07:34:09 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:43.850 07:34:09 -- common/autotest_common.sh@670 -- # es=1 00:06:43.850 07:34:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.850 00:06:43.850 real 0m0.318s 00:06:43.850 user 0m0.162s 00:06:43.850 sys 0m0.056s 00:06:43.850 07:34:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.850 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 ************************************ 00:06:43.850 END TEST dd_invalid_json 00:06:43.850 ************************************ 00:06:43.850 ************************************ 00:06:43.850 END TEST spdk_dd_negative 00:06:43.850 ************************************ 00:06:43.850 00:06:43.850 real 0m2.873s 00:06:43.850 user 0m1.379s 00:06:43.850 sys 0m1.104s 00:06:43.850 07:34:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.850 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 00:06:43.850 real 1m3.552s 00:06:43.850 user 0m39.297s 00:06:43.850 sys 0m15.275s 00:06:43.850 07:34:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.850 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 ************************************ 00:06:43.850 END TEST spdk_dd 00:06:43.850 ************************************ 00:06:43.850 07:34:09 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@255 -- # timing_exit lib 00:06:43.850 07:34:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:43.850 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 07:34:09 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:06:43.850 07:34:09 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:06:43.850 07:34:09 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.850 07:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:43.850 07:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.850 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 ************************************ 00:06:43.850 START TEST 
nvmf_tcp 00:06:43.850 ************************************ 00:06:43.850 07:34:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.850 * Looking for test storage... 00:06:43.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:43.850 07:34:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.850 07:34:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.850 07:34:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:44.110 07:34:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:44.110 07:34:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:44.110 07:34:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:44.110 07:34:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:44.110 07:34:09 -- scripts/common.sh@335 -- # IFS=.-: 00:06:44.110 07:34:09 -- scripts/common.sh@335 -- # read -ra ver1 00:06:44.110 07:34:09 -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.110 07:34:09 -- scripts/common.sh@336 -- # read -ra ver2 00:06:44.110 07:34:09 -- scripts/common.sh@337 -- # local 'op=<' 00:06:44.110 07:34:09 -- scripts/common.sh@339 -- # ver1_l=2 00:06:44.110 07:34:09 -- scripts/common.sh@340 -- # ver2_l=1 00:06:44.110 07:34:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:44.110 07:34:09 -- scripts/common.sh@343 -- # case "$op" in 00:06:44.110 07:34:09 -- scripts/common.sh@344 -- # : 1 00:06:44.110 07:34:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:44.110 07:34:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.110 07:34:09 -- scripts/common.sh@364 -- # decimal 1 00:06:44.110 07:34:09 -- scripts/common.sh@352 -- # local d=1 00:06:44.110 07:34:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.110 07:34:09 -- scripts/common.sh@354 -- # echo 1 00:06:44.110 07:34:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:44.110 07:34:09 -- scripts/common.sh@365 -- # decimal 2 00:06:44.110 07:34:09 -- scripts/common.sh@352 -- # local d=2 00:06:44.110 07:34:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.110 07:34:09 -- scripts/common.sh@354 -- # echo 2 00:06:44.111 07:34:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:44.111 07:34:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:44.111 07:34:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:44.111 07:34:09 -- scripts/common.sh@367 -- # return 0 00:06:44.111 07:34:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.111 07:34:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:44.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.111 --rc genhtml_branch_coverage=1 00:06:44.111 --rc genhtml_function_coverage=1 00:06:44.111 --rc genhtml_legend=1 00:06:44.111 --rc geninfo_all_blocks=1 00:06:44.111 --rc geninfo_unexecuted_blocks=1 00:06:44.111 00:06:44.111 ' 00:06:44.111 07:34:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:44.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.111 --rc genhtml_branch_coverage=1 00:06:44.111 --rc genhtml_function_coverage=1 00:06:44.111 --rc genhtml_legend=1 00:06:44.111 --rc geninfo_all_blocks=1 00:06:44.111 --rc geninfo_unexecuted_blocks=1 00:06:44.111 00:06:44.111 ' 00:06:44.111 07:34:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:44.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.111 --rc 
genhtml_branch_coverage=1 00:06:44.111 --rc genhtml_function_coverage=1 00:06:44.111 --rc genhtml_legend=1 00:06:44.111 --rc geninfo_all_blocks=1 00:06:44.111 --rc geninfo_unexecuted_blocks=1 00:06:44.111 00:06:44.111 ' 00:06:44.111 07:34:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:44.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.111 --rc genhtml_branch_coverage=1 00:06:44.111 --rc genhtml_function_coverage=1 00:06:44.111 --rc genhtml_legend=1 00:06:44.111 --rc geninfo_all_blocks=1 00:06:44.111 --rc geninfo_unexecuted_blocks=1 00:06:44.111 00:06:44.111 ' 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.111 07:34:09 -- nvmf/common.sh@7 -- # uname -s 00:06:44.111 07:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.111 07:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.111 07:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.111 07:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.111 07:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.111 07:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.111 07:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.111 07:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.111 07:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.111 07:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.111 07:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:06:44.111 07:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:06:44.111 07:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.111 07:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.111 07:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:44.111 07:34:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.111 07:34:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.111 07:34:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.111 07:34:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.111 07:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.111 07:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.111 07:34:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.111 07:34:09 -- paths/export.sh@5 -- # export PATH 00:06:44.111 07:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.111 07:34:09 -- nvmf/common.sh@46 -- # : 0 00:06:44.111 07:34:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:44.111 07:34:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:44.111 07:34:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:44.111 07:34:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.111 07:34:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.111 07:34:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:44.111 07:34:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:44.111 07:34:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.111 07:34:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:44.111 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:06:44.111 07:34:09 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:44.111 07:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:44.111 07:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.111 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:44.111 ************************************ 00:06:44.111 START TEST nvmf_host_management 00:06:44.111 ************************************ 00:06:44.111 07:34:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:44.111 * Looking for test storage... 
00:06:44.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:44.111 07:34:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:44.111 07:34:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:44.111 07:34:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:44.371 07:34:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:44.371 07:34:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:44.371 07:34:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:44.371 07:34:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:44.371 07:34:09 -- scripts/common.sh@335 -- # IFS=.-: 00:06:44.371 07:34:09 -- scripts/common.sh@335 -- # read -ra ver1 00:06:44.371 07:34:09 -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.371 07:34:09 -- scripts/common.sh@336 -- # read -ra ver2 00:06:44.371 07:34:09 -- scripts/common.sh@337 -- # local 'op=<' 00:06:44.371 07:34:09 -- scripts/common.sh@339 -- # ver1_l=2 00:06:44.371 07:34:09 -- scripts/common.sh@340 -- # ver2_l=1 00:06:44.371 07:34:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:44.371 07:34:09 -- scripts/common.sh@343 -- # case "$op" in 00:06:44.371 07:34:09 -- scripts/common.sh@344 -- # : 1 00:06:44.371 07:34:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:44.371 07:34:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.371 07:34:09 -- scripts/common.sh@364 -- # decimal 1 00:06:44.371 07:34:09 -- scripts/common.sh@352 -- # local d=1 00:06:44.371 07:34:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.371 07:34:09 -- scripts/common.sh@354 -- # echo 1 00:06:44.371 07:34:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:44.371 07:34:09 -- scripts/common.sh@365 -- # decimal 2 00:06:44.371 07:34:09 -- scripts/common.sh@352 -- # local d=2 00:06:44.371 07:34:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.371 07:34:09 -- scripts/common.sh@354 -- # echo 2 00:06:44.371 07:34:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:44.371 07:34:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:44.371 07:34:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:44.371 07:34:09 -- scripts/common.sh@367 -- # return 0 00:06:44.371 07:34:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.371 07:34:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.371 --rc genhtml_branch_coverage=1 00:06:44.371 --rc genhtml_function_coverage=1 00:06:44.371 --rc genhtml_legend=1 00:06:44.371 --rc geninfo_all_blocks=1 00:06:44.371 --rc geninfo_unexecuted_blocks=1 00:06:44.371 00:06:44.371 ' 00:06:44.371 07:34:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.371 --rc genhtml_branch_coverage=1 00:06:44.371 --rc genhtml_function_coverage=1 00:06:44.371 --rc genhtml_legend=1 00:06:44.371 --rc geninfo_all_blocks=1 00:06:44.371 --rc geninfo_unexecuted_blocks=1 00:06:44.371 00:06:44.371 ' 00:06:44.371 07:34:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.371 --rc genhtml_branch_coverage=1 00:06:44.371 --rc genhtml_function_coverage=1 00:06:44.371 --rc genhtml_legend=1 00:06:44.371 --rc geninfo_all_blocks=1 00:06:44.371 --rc geninfo_unexecuted_blocks=1 00:06:44.371 00:06:44.371 ' 00:06:44.371 
07:34:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:44.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.371 --rc genhtml_branch_coverage=1 00:06:44.371 --rc genhtml_function_coverage=1 00:06:44.371 --rc genhtml_legend=1 00:06:44.371 --rc geninfo_all_blocks=1 00:06:44.371 --rc geninfo_unexecuted_blocks=1 00:06:44.371 00:06:44.371 ' 00:06:44.371 07:34:09 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.371 07:34:09 -- nvmf/common.sh@7 -- # uname -s 00:06:44.371 07:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.371 07:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.371 07:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.371 07:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.371 07:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.372 07:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.372 07:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.372 07:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.372 07:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.372 07:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.372 07:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:06:44.372 07:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:06:44.372 07:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.372 07:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.372 07:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:44.372 07:34:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.372 07:34:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.372 07:34:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.372 07:34:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.372 07:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.372 07:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.372 07:34:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.372 07:34:09 -- paths/export.sh@5 -- # export PATH 00:06:44.372 07:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.372 07:34:09 -- nvmf/common.sh@46 -- # : 0 00:06:44.372 07:34:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:44.372 07:34:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:44.372 07:34:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:44.372 07:34:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.372 07:34:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.372 07:34:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:44.372 07:34:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:44.372 07:34:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:44.372 07:34:09 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.372 07:34:09 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.372 07:34:09 -- target/host_management.sh@104 -- # nvmftestinit 00:06:44.372 07:34:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:44.372 07:34:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.372 07:34:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:44.372 07:34:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:44.372 07:34:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:44.372 07:34:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.372 07:34:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.372 07:34:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.372 07:34:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:44.372 07:34:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:44.372 07:34:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:44.372 07:34:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:44.372 07:34:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:44.372 07:34:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:44.372 07:34:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.372 07:34:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.372 07:34:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:44.372 07:34:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:44.372 07:34:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:44.372 07:34:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:44.372 07:34:09 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:44.372 07:34:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.372 07:34:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:44.372 07:34:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:44.372 07:34:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:44.372 07:34:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:44.372 07:34:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:44.372 Cannot find device "nvmf_init_br" 00:06:44.372 07:34:09 -- nvmf/common.sh@153 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:44.372 Cannot find device "nvmf_tgt_br" 00:06:44.372 07:34:09 -- nvmf/common.sh@154 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:44.372 Cannot find device "nvmf_tgt_br2" 00:06:44.372 07:34:09 -- nvmf/common.sh@155 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:44.372 Cannot find device "nvmf_init_br" 00:06:44.372 07:34:09 -- nvmf/common.sh@156 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:44.372 Cannot find device "nvmf_tgt_br" 00:06:44.372 07:34:09 -- nvmf/common.sh@157 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:44.372 Cannot find device "nvmf_tgt_br2" 00:06:44.372 07:34:09 -- nvmf/common.sh@158 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:44.372 Cannot find device "nvmf_br" 00:06:44.372 07:34:09 -- nvmf/common.sh@159 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:44.372 Cannot find device "nvmf_init_if" 00:06:44.372 07:34:09 -- nvmf/common.sh@160 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:44.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:44.372 07:34:09 -- nvmf/common.sh@161 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:44.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:44.372 07:34:09 -- nvmf/common.sh@162 -- # true 00:06:44.372 07:34:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:44.372 07:34:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:44.372 07:34:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:44.372 07:34:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:44.372 07:34:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:44.372 07:34:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:44.631 07:34:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:44.631 07:34:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:44.631 07:34:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:44.631 07:34:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:44.631 07:34:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:44.631 07:34:10 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:44.631 07:34:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:44.631 07:34:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:44.631 07:34:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:44.631 07:34:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:44.631 07:34:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:44.631 07:34:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:44.631 07:34:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:44.631 07:34:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:44.631 07:34:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:44.631 07:34:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:44.632 07:34:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:44.632 07:34:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:44.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:06:44.632 00:06:44.632 --- 10.0.0.2 ping statistics --- 00:06:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.632 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:06:44.632 07:34:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:44.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:44.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:06:44.632 00:06:44.632 --- 10.0.0.3 ping statistics --- 00:06:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.632 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:06:44.632 07:34:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:44.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:44.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:06:44.632 00:06:44.632 --- 10.0.0.1 ping statistics --- 00:06:44.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.632 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:44.632 07:34:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.632 07:34:10 -- nvmf/common.sh@421 -- # return 0 00:06:44.632 07:34:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:44.632 07:34:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.632 07:34:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:44.632 07:34:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:44.632 07:34:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.632 07:34:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:44.632 07:34:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:44.632 07:34:10 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:06:44.632 07:34:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.632 07:34:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.632 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:44.891 ************************************ 00:06:44.891 START TEST nvmf_host_management 00:06:44.891 ************************************ 00:06:44.891 07:34:10 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:06:44.891 07:34:10 -- target/host_management.sh@69 -- # starttarget 00:06:44.891 07:34:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:44.891 07:34:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:44.891 07:34:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:44.891 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:44.891 07:34:10 -- nvmf/common.sh@469 -- # nvmfpid=59969 00:06:44.891 07:34:10 -- nvmf/common.sh@470 -- # waitforlisten 59969 00:06:44.891 07:34:10 -- common/autotest_common.sh@829 -- # '[' -z 59969 ']' 00:06:44.891 07:34:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.891 07:34:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:44.891 07:34:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.891 07:34:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.891 07:34:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.891 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:44.891 [2024-12-02 07:34:10.319984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.891 [2024-12-02 07:34:10.320087] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.891 [2024-12-02 07:34:10.462129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.151 [2024-12-02 07:34:10.536526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.151 [2024-12-02 07:34:10.537139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:45.151 [2024-12-02 07:34:10.537370] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.151 [2024-12-02 07:34:10.537565] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.151 [2024-12-02 07:34:10.537955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.151 [2024-12-02 07:34:10.538016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.151 [2024-12-02 07:34:10.538142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.151 [2024-12-02 07:34:10.538538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.722 07:34:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.722 07:34:11 -- common/autotest_common.sh@862 -- # return 0 00:06:45.722 07:34:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:45.722 07:34:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.722 07:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.011 07:34:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.011 07:34:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:46.011 07:34:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.011 07:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.011 [2024-12-02 07:34:11.358085] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.011 07:34:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.011 07:34:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:46.011 07:34:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:46.011 07:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.011 07:34:11 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:46.011 07:34:11 -- target/host_management.sh@23 -- # cat 00:06:46.011 07:34:11 -- target/host_management.sh@30 -- # rpc_cmd 00:06:46.011 07:34:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.011 07:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.011 Malloc0 00:06:46.011 [2024-12-02 07:34:11.424634] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.011 07:34:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.011 07:34:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:46.011 07:34:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.011 07:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:46.011 07:34:11 -- target/host_management.sh@73 -- # perfpid=60028 00:06:46.011 07:34:11 -- target/host_management.sh@74 -- # waitforlisten 60028 /var/tmp/bdevperf.sock 00:06:46.011 07:34:11 -- common/autotest_common.sh@829 -- # '[' -z 60028 ']' 00:06:46.011 07:34:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:46.011 07:34:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.011 07:34:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:06:46.011 07:34:11 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:46.011 07:34:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.011 07:34:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.011 07:34:11 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:46.011 07:34:11 -- nvmf/common.sh@520 -- # config=() 00:06:46.011 07:34:11 -- nvmf/common.sh@520 -- # local subsystem config 00:06:46.011 07:34:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:06:46.011 07:34:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:06:46.011 { 00:06:46.011 "params": { 00:06:46.011 "name": "Nvme$subsystem", 00:06:46.011 "trtype": "$TEST_TRANSPORT", 00:06:46.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:46.011 "adrfam": "ipv4", 00:06:46.011 "trsvcid": "$NVMF_PORT", 00:06:46.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:46.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:46.011 "hdgst": ${hdgst:-false}, 00:06:46.011 "ddgst": ${ddgst:-false} 00:06:46.011 }, 00:06:46.011 "method": "bdev_nvme_attach_controller" 00:06:46.011 } 00:06:46.011 EOF 00:06:46.011 )") 00:06:46.011 07:34:11 -- nvmf/common.sh@542 -- # cat 00:06:46.011 07:34:11 -- nvmf/common.sh@544 -- # jq . 00:06:46.011 07:34:11 -- nvmf/common.sh@545 -- # IFS=, 00:06:46.011 07:34:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:06:46.011 "params": { 00:06:46.011 "name": "Nvme0", 00:06:46.011 "trtype": "tcp", 00:06:46.011 "traddr": "10.0.0.2", 00:06:46.011 "adrfam": "ipv4", 00:06:46.011 "trsvcid": "4420", 00:06:46.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:46.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:46.011 "hdgst": false, 00:06:46.011 "ddgst": false 00:06:46.011 }, 00:06:46.011 "method": "bdev_nvme_attach_controller" 00:06:46.011 }' 00:06:46.011 [2024-12-02 07:34:11.525105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.011 [2024-12-02 07:34:11.525196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60028 ] 00:06:46.288 [2024-12-02 07:34:11.666087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.288 [2024-12-02 07:34:11.733840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.288 Running I/O for 10 seconds... 
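[editor's note] For readers reconstructing this step outside the log: the bdevperf run above is driven by the single-controller JSON printed by gen_nvmf_target_json and fed in through /dev/fd/63. A minimal standalone sketch follows; the outer wrapper (a "subsystems" array holding a bdev subsystem) and the file name bdevperf.json are assumptions based on SPDK's standard application JSON config layout, while the controller parameters, NQNs, target address, and command-line flags are taken verbatim from the log lines above.

    # Sketch only -- the logged test substitutes this config via /dev/fd/63 rather than a file.
    cat > bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the logged run: queue depth 64, 64 KiB I/Os, 10 s verify workload.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json bdevperf.json -q 64 -o 65536 -w verify -t 10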
00:06:47.240 07:34:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.240 07:34:12 -- common/autotest_common.sh@862 -- # return 0 00:06:47.240 07:34:12 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:47.240 07:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.240 07:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:47.240 07:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.240 07:34:12 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.240 07:34:12 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:47.240 07:34:12 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:47.240 07:34:12 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:47.240 07:34:12 -- target/host_management.sh@52 -- # local ret=1 00:06:47.240 07:34:12 -- target/host_management.sh@53 -- # local i 00:06:47.240 07:34:12 -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:47.240 07:34:12 -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:47.240 07:34:12 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:47.240 07:34:12 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:47.240 07:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.240 07:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:47.240 07:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.240 07:34:12 -- target/host_management.sh@55 -- # read_io_count=2044 00:06:47.240 07:34:12 -- target/host_management.sh@58 -- # '[' 2044 -ge 100 ']' 00:06:47.240 07:34:12 -- target/host_management.sh@59 -- # ret=0 00:06:47.240 07:34:12 -- target/host_management.sh@60 -- # break 00:06:47.240 07:34:12 -- target/host_management.sh@64 -- # return 0 00:06:47.241 07:34:12 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:47.241 07:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.241 07:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:47.241 07:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.241 [2024-12-02 07:34:12.610924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.610971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 07:34:12 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:47.241 [2024-12-02 07:34:12.611033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 
[2024-12-02 07:34:12.611054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 07:34:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.241 [2024-12-02 07:34:12.611234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 07:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:47.241 [2024-12-02 07:34:12.611448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.241 [2024-12-02 07:34:12.611834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.241 [2024-12-02 07:34:12.611843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.611864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.611884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.611905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.611925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.611946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.611981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.611992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.242 [2024-12-02 07:34:12.612388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285400 is same with the state(5) to be set 00:06:47.242 [2024-12-02 07:34:12.612448] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1285400 was disconnected and freed. reset controller. 00:06:47.242 [2024-12-02 07:34:12.612549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.242 [2024-12-02 07:34:12.612576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.242 [2024-12-02 07:34:12.612597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.242 [2024-12-02 07:34:12.612620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:47.242 [2024-12-02 07:34:12.612639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:47.242 [2024-12-02 07:34:12.612648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ab150 is same with the state(5) to be set 00:06:47.242 [2024-12-02 07:34:12.613799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:47.242 task offset: 24192 on job bdev=Nvme0n1 fails 00:06:47.242 00:06:47.242 Latency(us) 00:06:47.242 [2024-12-02T07:34:12.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:47.242 [2024-12-02T07:34:12.866Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:47.242 [2024-12-02T07:34:12.866Z] Job: Nvme0n1 ended in about 0.73 seconds with error 00:06:47.242 Verification LBA range: start 0x0 length 0x400 00:06:47.242 Nvme0n1 : 0.73 3021.44 188.84 87.18 0.00 20246.88 2144.81 28716.68 00:06:47.242 [2024-12-02T07:34:12.866Z] =================================================================================================================== 00:06:47.242 [2024-12-02T07:34:12.866Z] Total : 3021.44 188.84 87.18 0.00 20246.88 2144.81 28716.68 00:06:47.242 [2024-12-02 07:34:12.615866] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.242 [2024-12-02 07:34:12.615894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ab150 
(9): Bad file descriptor 00:06:47.242 07:34:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.242 07:34:12 -- target/host_management.sh@87 -- # sleep 1 00:06:47.242 [2024-12-02 07:34:12.626850] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:48.178 07:34:13 -- target/host_management.sh@91 -- # kill -9 60028 00:06:48.178 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60028) - No such process 00:06:48.178 07:34:13 -- target/host_management.sh@91 -- # true 00:06:48.178 07:34:13 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:48.178 07:34:13 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:48.178 07:34:13 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:48.178 07:34:13 -- nvmf/common.sh@520 -- # config=() 00:06:48.178 07:34:13 -- nvmf/common.sh@520 -- # local subsystem config 00:06:48.178 07:34:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:06:48.178 07:34:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:06:48.178 { 00:06:48.178 "params": { 00:06:48.178 "name": "Nvme$subsystem", 00:06:48.178 "trtype": "$TEST_TRANSPORT", 00:06:48.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:48.178 "adrfam": "ipv4", 00:06:48.178 "trsvcid": "$NVMF_PORT", 00:06:48.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:48.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:48.178 "hdgst": ${hdgst:-false}, 00:06:48.178 "ddgst": ${ddgst:-false} 00:06:48.178 }, 00:06:48.178 "method": "bdev_nvme_attach_controller" 00:06:48.178 } 00:06:48.178 EOF 00:06:48.178 )") 00:06:48.178 07:34:13 -- nvmf/common.sh@542 -- # cat 00:06:48.178 07:34:13 -- nvmf/common.sh@544 -- # jq . 00:06:48.178 07:34:13 -- nvmf/common.sh@545 -- # IFS=, 00:06:48.178 07:34:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:06:48.178 "params": { 00:06:48.178 "name": "Nvme0", 00:06:48.178 "trtype": "tcp", 00:06:48.178 "traddr": "10.0.0.2", 00:06:48.178 "adrfam": "ipv4", 00:06:48.178 "trsvcid": "4420", 00:06:48.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:48.178 "hdgst": false, 00:06:48.178 "ddgst": false 00:06:48.178 }, 00:06:48.178 "method": "bdev_nvme_attach_controller" 00:06:48.178 }' 00:06:48.178 [2024-12-02 07:34:13.673389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.178 [2024-12-02 07:34:13.673461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60066 ] 00:06:48.437 [2024-12-02 07:34:13.807901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.437 [2024-12-02 07:34:13.860196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.437 Running I/O for 1 seconds... 
00:06:49.815 00:06:49.815 Latency(us) 00:06:49.815 [2024-12-02T07:34:15.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.815 [2024-12-02T07:34:15.439Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:49.815 Verification LBA range: start 0x0 length 0x400 00:06:49.815 Nvme0n1 : 1.01 3232.20 202.01 0.00 0.00 19517.77 1995.87 23831.27 00:06:49.815 [2024-12-02T07:34:15.439Z] =================================================================================================================== 00:06:49.815 [2024-12-02T07:34:15.439Z] Total : 3232.20 202.01 0.00 0.00 19517.77 1995.87 23831.27 00:06:49.815 07:34:15 -- target/host_management.sh@101 -- # stoptarget 00:06:49.815 07:34:15 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:49.815 07:34:15 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:49.815 07:34:15 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:49.815 07:34:15 -- target/host_management.sh@40 -- # nvmftestfini 00:06:49.815 07:34:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:49.815 07:34:15 -- nvmf/common.sh@116 -- # sync 00:06:49.815 07:34:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:49.815 07:34:15 -- nvmf/common.sh@119 -- # set +e 00:06:49.815 07:34:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:49.815 07:34:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:49.815 rmmod nvme_tcp 00:06:49.815 rmmod nvme_fabrics 00:06:49.815 rmmod nvme_keyring 00:06:49.815 07:34:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:49.815 07:34:15 -- nvmf/common.sh@123 -- # set -e 00:06:49.815 07:34:15 -- nvmf/common.sh@124 -- # return 0 00:06:49.815 07:34:15 -- nvmf/common.sh@477 -- # '[' -n 59969 ']' 00:06:49.815 07:34:15 -- nvmf/common.sh@478 -- # killprocess 59969 00:06:49.815 07:34:15 -- common/autotest_common.sh@936 -- # '[' -z 59969 ']' 00:06:49.815 07:34:15 -- common/autotest_common.sh@940 -- # kill -0 59969 00:06:49.815 07:34:15 -- common/autotest_common.sh@941 -- # uname 00:06:49.815 07:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.815 07:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59969 00:06:49.815 07:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:06:49.815 07:34:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:06:49.815 killing process with pid 59969 00:06:49.815 07:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59969' 00:06:49.815 07:34:15 -- common/autotest_common.sh@955 -- # kill 59969 00:06:49.815 07:34:15 -- common/autotest_common.sh@960 -- # wait 59969 00:06:50.074 [2024-12-02 07:34:15.498449] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:50.074 07:34:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:50.074 07:34:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:50.074 07:34:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:50.074 07:34:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.074 07:34:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:50.074 07:34:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.074 07:34:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.074 07:34:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.074 07:34:15 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:50.074 00:06:50.074 real 0m5.305s 00:06:50.074 user 0m22.453s 00:06:50.074 sys 0m1.182s 00:06:50.074 07:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.074 ************************************ 00:06:50.074 07:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:50.074 END TEST nvmf_host_management 00:06:50.074 ************************************ 00:06:50.074 07:34:15 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:06:50.074 00:06:50.074 real 0m6.000s 00:06:50.074 user 0m22.666s 00:06:50.074 sys 0m1.458s 00:06:50.074 07:34:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.074 ************************************ 00:06:50.074 END TEST nvmf_host_management 00:06:50.074 ************************************ 00:06:50.074 07:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:50.074 07:34:15 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:50.074 07:34:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:50.074 07:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.074 07:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:50.074 ************************************ 00:06:50.074 START TEST nvmf_lvol 00:06:50.074 ************************************ 00:06:50.074 07:34:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:50.333 * Looking for test storage... 00:06:50.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.333 07:34:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.333 07:34:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.333 07:34:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.333 07:34:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.333 07:34:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.333 07:34:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.333 07:34:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.333 07:34:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.333 07:34:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.333 07:34:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.333 07:34:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.333 07:34:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.333 07:34:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.333 07:34:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.333 07:34:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.333 07:34:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.333 07:34:15 -- scripts/common.sh@344 -- # : 1 00:06:50.333 07:34:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.333 07:34:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.333 07:34:15 -- scripts/common.sh@364 -- # decimal 1 00:06:50.333 07:34:15 -- scripts/common.sh@352 -- # local d=1 00:06:50.333 07:34:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.333 07:34:15 -- scripts/common.sh@354 -- # echo 1 00:06:50.333 07:34:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.333 07:34:15 -- scripts/common.sh@365 -- # decimal 2 00:06:50.333 07:34:15 -- scripts/common.sh@352 -- # local d=2 00:06:50.333 07:34:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.333 07:34:15 -- scripts/common.sh@354 -- # echo 2 00:06:50.333 07:34:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.333 07:34:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.333 07:34:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.333 07:34:15 -- scripts/common.sh@367 -- # return 0 00:06:50.333 07:34:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.333 07:34:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.333 --rc genhtml_branch_coverage=1 00:06:50.333 --rc genhtml_function_coverage=1 00:06:50.333 --rc genhtml_legend=1 00:06:50.333 --rc geninfo_all_blocks=1 00:06:50.333 --rc geninfo_unexecuted_blocks=1 00:06:50.333 00:06:50.333 ' 00:06:50.333 07:34:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.333 --rc genhtml_branch_coverage=1 00:06:50.333 --rc genhtml_function_coverage=1 00:06:50.333 --rc genhtml_legend=1 00:06:50.333 --rc geninfo_all_blocks=1 00:06:50.333 --rc geninfo_unexecuted_blocks=1 00:06:50.333 00:06:50.333 ' 00:06:50.333 07:34:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.333 --rc genhtml_branch_coverage=1 00:06:50.333 --rc genhtml_function_coverage=1 00:06:50.333 --rc genhtml_legend=1 00:06:50.333 --rc geninfo_all_blocks=1 00:06:50.333 --rc geninfo_unexecuted_blocks=1 00:06:50.333 00:06:50.333 ' 00:06:50.333 07:34:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.333 --rc genhtml_branch_coverage=1 00:06:50.333 --rc genhtml_function_coverage=1 00:06:50.333 --rc genhtml_legend=1 00:06:50.334 --rc geninfo_all_blocks=1 00:06:50.334 --rc geninfo_unexecuted_blocks=1 00:06:50.334 00:06:50.334 ' 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:50.334 07:34:15 -- nvmf/common.sh@7 -- # uname -s 00:06:50.334 07:34:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.334 07:34:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.334 07:34:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.334 07:34:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.334 07:34:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.334 07:34:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.334 07:34:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.334 07:34:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.334 07:34:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.334 07:34:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.334 07:34:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:06:50.334 
07:34:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:06:50.334 07:34:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.334 07:34:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.334 07:34:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:50.334 07:34:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.334 07:34:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.334 07:34:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.334 07:34:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.334 07:34:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.334 07:34:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.334 07:34:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.334 07:34:15 -- paths/export.sh@5 -- # export PATH 00:06:50.334 07:34:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.334 07:34:15 -- nvmf/common.sh@46 -- # : 0 00:06:50.334 07:34:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:50.334 07:34:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:50.334 07:34:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:50.334 07:34:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.334 07:34:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.334 07:34:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:06:50.334 07:34:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:50.334 07:34:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:50.334 07:34:15 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:50.334 07:34:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:50.334 07:34:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.334 07:34:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:50.334 07:34:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:50.334 07:34:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:50.334 07:34:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.334 07:34:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.334 07:34:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.334 07:34:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:50.334 07:34:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:50.334 07:34:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:50.334 07:34:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:50.334 07:34:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:50.334 07:34:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:50.334 07:34:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.334 07:34:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.334 07:34:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:50.334 07:34:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:50.334 07:34:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:50.334 07:34:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:50.334 07:34:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:50.334 07:34:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.334 07:34:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:50.334 07:34:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:50.334 07:34:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:50.334 07:34:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:50.334 07:34:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:50.334 07:34:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:50.334 Cannot find device "nvmf_tgt_br" 00:06:50.334 07:34:15 -- nvmf/common.sh@154 -- # true 00:06:50.334 07:34:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:50.334 Cannot find device "nvmf_tgt_br2" 00:06:50.334 07:34:15 -- nvmf/common.sh@155 -- # true 00:06:50.334 07:34:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:50.334 07:34:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:50.334 Cannot find device "nvmf_tgt_br" 00:06:50.334 07:34:15 -- nvmf/common.sh@157 -- # true 00:06:50.334 07:34:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:50.334 Cannot find device "nvmf_tgt_br2" 00:06:50.334 07:34:15 -- nvmf/common.sh@158 -- # true 00:06:50.334 07:34:15 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:06:50.334 07:34:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:50.334 07:34:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:50.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.594 07:34:15 -- nvmf/common.sh@161 -- # true 00:06:50.594 07:34:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:50.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.594 07:34:15 -- nvmf/common.sh@162 -- # true 00:06:50.594 07:34:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:50.594 07:34:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:50.594 07:34:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:50.594 07:34:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:50.594 07:34:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:50.594 07:34:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:50.594 07:34:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:50.594 07:34:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:50.594 07:34:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:50.594 07:34:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:50.594 07:34:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:50.594 07:34:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:50.594 07:34:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:50.594 07:34:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:50.594 07:34:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:50.594 07:34:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:50.594 07:34:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:50.594 07:34:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:50.594 07:34:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:50.594 07:34:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:50.594 07:34:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:50.594 07:34:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:50.594 07:34:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:50.594 07:34:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:50.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:06:50.594 00:06:50.594 --- 10.0.0.2 ping statistics --- 00:06:50.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.594 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:50.594 07:34:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:50.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:06:50.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:06:50.594 00:06:50.594 --- 10.0.0.3 ping statistics --- 00:06:50.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.594 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:06:50.594 07:34:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:50.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:06:50.594 00:06:50.594 --- 10.0.0.1 ping statistics --- 00:06:50.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.594 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:06:50.594 07:34:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.594 07:34:16 -- nvmf/common.sh@421 -- # return 0 00:06:50.594 07:34:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:50.594 07:34:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.594 07:34:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:50.594 07:34:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:50.594 07:34:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.594 07:34:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:50.594 07:34:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:50.594 07:34:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:50.594 07:34:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:50.594 07:34:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:50.594 07:34:16 -- common/autotest_common.sh@10 -- # set +x 00:06:50.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.594 07:34:16 -- nvmf/common.sh@469 -- # nvmfpid=60300 00:06:50.594 07:34:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:50.594 07:34:16 -- nvmf/common.sh@470 -- # waitforlisten 60300 00:06:50.594 07:34:16 -- common/autotest_common.sh@829 -- # '[' -z 60300 ']' 00:06:50.594 07:34:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.594 07:34:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.594 07:34:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.594 07:34:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.594 07:34:16 -- common/autotest_common.sh@10 -- # set +x 00:06:50.853 [2024-12-02 07:34:16.243373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.854 [2024-12-02 07:34:16.243905] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.854 [2024-12-02 07:34:16.375812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.854 [2024-12-02 07:34:16.426889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.854 [2024-12-02 07:34:16.427177] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.854 [2024-12-02 07:34:16.427239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:50.854 [2024-12-02 07:34:16.427311] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:50.854 [2024-12-02 07:34:16.427686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.854 [2024-12-02 07:34:16.427855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.854 [2024-12-02 07:34:16.427862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.791 07:34:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.791 07:34:17 -- common/autotest_common.sh@862 -- # return 0 00:06:51.791 07:34:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:51.791 07:34:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.791 07:34:17 -- common/autotest_common.sh@10 -- # set +x 00:06:51.791 07:34:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.791 07:34:17 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:52.050 [2024-12-02 07:34:17.486997] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.050 07:34:17 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:52.308 07:34:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:52.308 07:34:17 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:52.566 07:34:18 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:52.566 07:34:18 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:52.824 07:34:18 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:53.083 07:34:18 -- target/nvmf_lvol.sh@29 -- # lvs=897abda5-60d1-4de7-8f4c-ccd7df523ed7 00:06:53.083 07:34:18 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 897abda5-60d1-4de7-8f4c-ccd7df523ed7 lvol 20 00:06:53.341 07:34:18 -- target/nvmf_lvol.sh@32 -- # lvol=a14ce71a-e05a-4534-bf98-c7603f3d9c78 00:06:53.341 07:34:18 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.600 07:34:18 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a14ce71a-e05a-4534-bf98-c7603f3d9c78 00:06:53.600 07:34:19 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:53.859 [2024-12-02 07:34:19.455045] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.859 07:34:19 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.118 07:34:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=60370 00:06:54.118 07:34:19 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:54.118 07:34:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:55.497 07:34:20 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a14ce71a-e05a-4534-bf98-c7603f3d9c78 MY_SNAPSHOT 
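For reference, the lvol setup that the xtrace above walks through reduces to the following rpc.py sequence against the running nvmf_tgt (a condensed sketch, not harness output; UUIDs and sizes are the ones reported in this run, and the resize/snapshot/clone/inflate steps that follow continue the same pattern):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512                                    # Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512                                    # Malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs                           # -> 897abda5-60d1-4de7-8f4c-ccd7df523ed7
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 897abda5-60d1-4de7-8f4c-ccd7df523ed7 lvol 20   # size 20 (LVOL_BDEV_INIT_SIZE) -> a14ce71a-e05a-4534-bf98-c7603f3d9c78
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a14ce71a-e05a-4534-bf98-c7603f3d9c78
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a14ce71a-e05a-4534-bf98-c7603f3d9c78 MY_SNAPSHOT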
00:06:55.497 07:34:20 -- target/nvmf_lvol.sh@47 -- # snapshot=106bfa17-5254-4ef9-9342-5ac903303308 00:06:55.497 07:34:20 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a14ce71a-e05a-4534-bf98-c7603f3d9c78 30 00:06:55.756 07:34:21 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 106bfa17-5254-4ef9-9342-5ac903303308 MY_CLONE 00:06:56.016 07:34:21 -- target/nvmf_lvol.sh@49 -- # clone=0cf2c6a1-8447-495a-a8d3-e977ac192561 00:06:56.016 07:34:21 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0cf2c6a1-8447-495a-a8d3-e977ac192561 00:06:56.275 07:34:21 -- target/nvmf_lvol.sh@53 -- # wait 60370 00:07:04.395 Initializing NVMe Controllers 00:07:04.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:04.395 Controller IO queue size 128, less than required. 00:07:04.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:04.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:04.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:04.395 Initialization complete. Launching workers. 00:07:04.395 ======================================================== 00:07:04.395 Latency(us) 00:07:04.395 Device Information : IOPS MiB/s Average min max 00:07:04.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10684.60 41.74 11988.34 1773.17 54578.59 00:07:04.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10796.40 42.17 11860.81 2589.42 70548.68 00:07:04.395 ======================================================== 00:07:04.395 Total : 21481.00 83.91 11924.24 1773.17 70548.68 00:07:04.395 00:07:04.395 07:34:29 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:04.654 07:34:30 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a14ce71a-e05a-4534-bf98-c7603f3d9c78 00:07:04.913 07:34:30 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 897abda5-60d1-4de7-8f4c-ccd7df523ed7 00:07:05.173 07:34:30 -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:05.173 07:34:30 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:05.173 07:34:30 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:05.173 07:34:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:05.173 07:34:30 -- nvmf/common.sh@116 -- # sync 00:07:05.173 07:34:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:05.173 07:34:30 -- nvmf/common.sh@119 -- # set +e 00:07:05.173 07:34:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:05.173 07:34:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:05.173 rmmod nvme_tcp 00:07:05.173 rmmod nvme_fabrics 00:07:05.173 rmmod nvme_keyring 00:07:05.173 07:34:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:05.173 07:34:30 -- nvmf/common.sh@123 -- # set -e 00:07:05.173 07:34:30 -- nvmf/common.sh@124 -- # return 0 00:07:05.173 07:34:30 -- nvmf/common.sh@477 -- # '[' -n 60300 ']' 00:07:05.173 07:34:30 -- nvmf/common.sh@478 -- # killprocess 60300 00:07:05.173 07:34:30 -- common/autotest_common.sh@936 -- # '[' -z 60300 ']' 00:07:05.173 07:34:30 -- common/autotest_common.sh@940 -- # kill -0 60300 00:07:05.173 07:34:30 -- common/autotest_common.sh@941 -- # uname 00:07:05.173 
07:34:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.173 07:34:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60300 00:07:05.173 07:34:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:05.173 07:34:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:05.173 killing process with pid 60300 00:07:05.173 07:34:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60300' 00:07:05.173 07:34:30 -- common/autotest_common.sh@955 -- # kill 60300 00:07:05.173 07:34:30 -- common/autotest_common.sh@960 -- # wait 60300 00:07:05.433 07:34:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:05.433 07:34:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:05.433 07:34:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:05.433 07:34:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.433 07:34:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:05.433 07:34:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.433 07:34:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.433 07:34:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.433 07:34:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:05.433 00:07:05.433 real 0m15.340s 00:07:05.433 user 1m3.572s 00:07:05.433 sys 0m4.511s 00:07:05.433 07:34:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.433 07:34:30 -- common/autotest_common.sh@10 -- # set +x 00:07:05.433 ************************************ 00:07:05.433 END TEST nvmf_lvol 00:07:05.433 ************************************ 00:07:05.433 07:34:31 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.433 07:34:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.433 07:34:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.433 07:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:05.433 ************************************ 00:07:05.433 START TEST nvmf_lvs_grow 00:07:05.433 ************************************ 00:07:05.433 07:34:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.693 * Looking for test storage... 
00:07:05.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:05.693 07:34:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.693 07:34:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.693 07:34:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.693 07:34:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.693 07:34:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.693 07:34:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.693 07:34:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.693 07:34:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.693 07:34:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.693 07:34:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.693 07:34:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.693 07:34:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.693 07:34:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.693 07:34:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.693 07:34:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.693 07:34:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.693 07:34:31 -- scripts/common.sh@344 -- # : 1 00:07:05.693 07:34:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.693 07:34:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.693 07:34:31 -- scripts/common.sh@364 -- # decimal 1 00:07:05.693 07:34:31 -- scripts/common.sh@352 -- # local d=1 00:07:05.693 07:34:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.693 07:34:31 -- scripts/common.sh@354 -- # echo 1 00:07:05.693 07:34:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.693 07:34:31 -- scripts/common.sh@365 -- # decimal 2 00:07:05.693 07:34:31 -- scripts/common.sh@352 -- # local d=2 00:07:05.693 07:34:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.693 07:34:31 -- scripts/common.sh@354 -- # echo 2 00:07:05.693 07:34:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.693 07:34:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.693 07:34:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.693 07:34:31 -- scripts/common.sh@367 -- # return 0 00:07:05.693 07:34:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.693 07:34:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 00:07:05.693 ' 00:07:05.693 07:34:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 00:07:05.693 ' 00:07:05.693 07:34:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 00:07:05.693 ' 00:07:05.693 
07:34:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.693 --rc genhtml_branch_coverage=1 00:07:05.693 --rc genhtml_function_coverage=1 00:07:05.693 --rc genhtml_legend=1 00:07:05.693 --rc geninfo_all_blocks=1 00:07:05.693 --rc geninfo_unexecuted_blocks=1 00:07:05.693 00:07:05.693 ' 00:07:05.693 07:34:31 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.693 07:34:31 -- nvmf/common.sh@7 -- # uname -s 00:07:05.693 07:34:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.693 07:34:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.693 07:34:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.693 07:34:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.693 07:34:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.693 07:34:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.693 07:34:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.693 07:34:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.693 07:34:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.693 07:34:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.693 07:34:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:07:05.693 07:34:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:07:05.693 07:34:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.693 07:34:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.693 07:34:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:05.693 07:34:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.693 07:34:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.693 07:34:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.693 07:34:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.693 07:34:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.693 07:34:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.693 07:34:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.693 07:34:31 -- paths/export.sh@5 -- # export PATH 00:07:05.693 07:34:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.693 07:34:31 -- nvmf/common.sh@46 -- # : 0 00:07:05.693 07:34:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:05.693 07:34:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:05.693 07:34:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:05.693 07:34:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.693 07:34:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.693 07:34:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:05.693 07:34:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:05.693 07:34:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:05.693 07:34:31 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.693 07:34:31 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:05.693 07:34:31 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:07:05.693 07:34:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:05.693 07:34:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.693 07:34:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:05.693 07:34:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:05.693 07:34:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:05.693 07:34:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.693 07:34:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.693 07:34:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.693 07:34:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:05.693 07:34:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:05.693 07:34:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:05.693 07:34:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:05.693 07:34:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:05.693 07:34:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:05.693 07:34:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.693 07:34:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.693 07:34:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:05.693 07:34:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:05.693 07:34:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:05.693 07:34:31 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:05.693 07:34:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:05.693 07:34:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.693 07:34:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:05.693 07:34:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:05.693 07:34:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:05.693 07:34:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:05.693 07:34:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:05.693 07:34:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:05.693 Cannot find device "nvmf_tgt_br" 00:07:05.693 07:34:31 -- nvmf/common.sh@154 -- # true 00:07:05.693 07:34:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:05.693 Cannot find device "nvmf_tgt_br2" 00:07:05.693 07:34:31 -- nvmf/common.sh@155 -- # true 00:07:05.693 07:34:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:05.693 07:34:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:05.693 Cannot find device "nvmf_tgt_br" 00:07:05.693 07:34:31 -- nvmf/common.sh@157 -- # true 00:07:05.693 07:34:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:05.693 Cannot find device "nvmf_tgt_br2" 00:07:05.693 07:34:31 -- nvmf/common.sh@158 -- # true 00:07:05.693 07:34:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:05.953 07:34:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:05.953 07:34:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:05.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.953 07:34:31 -- nvmf/common.sh@161 -- # true 00:07:05.953 07:34:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:05.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.953 07:34:31 -- nvmf/common.sh@162 -- # true 00:07:05.953 07:34:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:05.953 07:34:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:05.953 07:34:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:05.953 07:34:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:05.953 07:34:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:05.953 07:34:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:05.953 07:34:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:05.953 07:34:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:05.953 07:34:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:05.953 07:34:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:05.953 07:34:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:05.953 07:34:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:05.953 07:34:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:05.953 07:34:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:05.953 07:34:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:07:05.953 07:34:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:05.953 07:34:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:05.953 07:34:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:05.953 07:34:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:05.953 07:34:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:05.953 07:34:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:05.953 07:34:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:05.953 07:34:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:05.953 07:34:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:05.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:05.953 00:07:05.953 --- 10.0.0.2 ping statistics --- 00:07:05.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.953 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:05.953 07:34:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:05.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:05.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:07:05.953 00:07:05.953 --- 10.0.0.3 ping statistics --- 00:07:05.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.953 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:05.953 07:34:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:05.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:05.953 00:07:05.953 --- 10.0.0.1 ping statistics --- 00:07:05.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.953 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:05.953 07:34:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.953 07:34:31 -- nvmf/common.sh@421 -- # return 0 00:07:05.953 07:34:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:05.953 07:34:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.953 07:34:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:05.953 07:34:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:05.953 07:34:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.953 07:34:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:05.953 07:34:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:05.953 07:34:31 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:07:05.953 07:34:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:05.953 07:34:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.953 07:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:05.953 07:34:31 -- nvmf/common.sh@469 -- # nvmfpid=60702 00:07:05.953 07:34:31 -- nvmf/common.sh@470 -- # waitforlisten 60702 00:07:05.953 07:34:31 -- common/autotest_common.sh@829 -- # '[' -z 60702 ']' 00:07:05.953 07:34:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:05.953 07:34:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.953 07:34:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
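nvmf_veth_init above builds the virtual test network before any NVMe-oF traffic flows: one target network namespace, three veth pairs, and a bridge joining the host-side ends, with the initiator at 10.0.0.1 and the target addresses 10.0.0.2/10.0.0.3 inside the namespace, plus iptables rules and pings to prove reachability. A condensed standalone sketch of the same topology, using the interface names and addresses from the trace:

    set -e
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # one veth pair per role; the initiator end stays in the host, target ends move into the namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # bridge the host-side ends so the initiator and both target ports share one L2 segment
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # allow NVMe/TCP (port 4420) in, allow bridge-internal forwarding, then sanity-check with ping
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3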
00:07:05.953 07:34:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.953 07:34:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.953 07:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:06.212 [2024-12-02 07:34:31.592705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.212 [2024-12-02 07:34:31.592779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.212 [2024-12-02 07:34:31.724471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.212 [2024-12-02 07:34:31.773008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.212 [2024-12-02 07:34:31.773145] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.212 [2024-12-02 07:34:31.773157] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.212 [2024-12-02 07:34:31.773164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.212 [2024-12-02 07:34:31.773187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.149 07:34:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.149 07:34:32 -- common/autotest_common.sh@862 -- # return 0 00:07:07.149 07:34:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:07.149 07:34:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.149 07:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:07.149 07:34:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.149 07:34:32 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:07.408 [2024-12-02 07:34:32.890060] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:07:07.408 07:34:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.408 07:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.408 07:34:32 -- common/autotest_common.sh@10 -- # set +x 00:07:07.408 ************************************ 00:07:07.408 START TEST lvs_grow_clean 00:07:07.408 ************************************ 00:07:07.408 07:34:32 -- common/autotest_common.sh@1114 -- # lvs_grow 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:07.408 07:34:32 -- target/nvmf_lvs_grow.sh@25 -- # 
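With the namespace in place, nvmfappstart launches the target application inside it and a TCP transport is created over JSON-RPC before the test body runs. A minimal sketch of that startup sequence, assuming the default RPC socket /var/tmp/spdk.sock; the polling loop stands in for the suite's waitforlisten helper, and probing with rpc_get_methods is only one way to check that the socket answers:

    SPDK=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK/scripts/rpc.py"

    # run the target inside the namespace: instance 0, all trace groups enabled, core 0 only
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # wait until the RPC server answers on its UNIX domain socket
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    # create the TCP transport with the same flags the test passes
    "$rpc" nvmf_create_transport -t tcp -o -u 8192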
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.666 07:34:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:07.666 07:34:33 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.925 07:34:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9e292055-7a7f-413c-b322-a533f540a410 00:07:07.925 07:34:33 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:07.925 07:34:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:08.185 07:34:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:08.185 07:34:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:08.185 07:34:33 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9e292055-7a7f-413c-b322-a533f540a410 lvol 150 00:07:08.444 07:34:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=74eb2ae8-c747-43d3-827a-712a7fe2ff75 00:07:08.444 07:34:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:08.444 07:34:33 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.703 [2024-12-02 07:34:34.075048] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.703 [2024-12-02 07:34:34.075123] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.703 true 00:07:08.703 07:34:34 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:08.703 07:34:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:08.962 07:34:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:08.962 07:34:34 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.962 07:34:34 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74eb2ae8-c747-43d3-827a-712a7fe2ff75 00:07:09.220 07:34:34 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.478 [2024-12-02 07:34:34.939511] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.478 07:34:34 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.737 07:34:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60786 00:07:09.737 07:34:35 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.737 07:34:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.737 07:34:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60786 /var/tmp/bdevperf.sock 00:07:09.737 07:34:35 -- common/autotest_common.sh@829 -- # '[' -z 60786 ']' 00:07:09.737 
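The lvs_grow_clean body then stacks its device chain: an AIO bdev over a 200 MiB file, a logical volume store with 4 MiB clusters on that bdev, a 150 MiB lvol inside the store, and finally an NVMe-oF TCP subsystem that exports the lvol on 10.0.0.2:4420. A condensed sketch of those RPC calls with the names used in the trace; capturing the UUIDs into shell variables is a simplification of what the script does:

    aio_file=$SPDK/test/nvmf/target/aio_bdev
    rm -f "$aio_file" && truncate -s 200M "$aio_file"

    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
    # 4 MiB clusters on a 200 MiB device leave 49 data clusters once metadata is reserved
    lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49

    lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume

    # export the lvol as namespace 1 of a TCP subsystem on the namespace-side address
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420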
07:34:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.737 07:34:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.737 07:34:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.737 07:34:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.737 07:34:35 -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 [2024-12-02 07:34:35.223930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.737 [2024-12-02 07:34:35.224006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60786 ] 00:07:09.737 [2024-12-02 07:34:35.351973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.995 [2024-12-02 07:34:35.402588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.564 07:34:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.564 07:34:36 -- common/autotest_common.sh@862 -- # return 0 00:07:10.564 07:34:36 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:11.131 Nvme0n1 00:07:11.131 07:34:36 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:11.390 [ 00:07:11.391 { 00:07:11.391 "name": "Nvme0n1", 00:07:11.391 "aliases": [ 00:07:11.391 "74eb2ae8-c747-43d3-827a-712a7fe2ff75" 00:07:11.391 ], 00:07:11.391 "product_name": "NVMe disk", 00:07:11.391 "block_size": 4096, 00:07:11.391 "num_blocks": 38912, 00:07:11.391 "uuid": "74eb2ae8-c747-43d3-827a-712a7fe2ff75", 00:07:11.391 "assigned_rate_limits": { 00:07:11.391 "rw_ios_per_sec": 0, 00:07:11.391 "rw_mbytes_per_sec": 0, 00:07:11.391 "r_mbytes_per_sec": 0, 00:07:11.391 "w_mbytes_per_sec": 0 00:07:11.391 }, 00:07:11.391 "claimed": false, 00:07:11.391 "zoned": false, 00:07:11.391 "supported_io_types": { 00:07:11.391 "read": true, 00:07:11.391 "write": true, 00:07:11.391 "unmap": true, 00:07:11.391 "write_zeroes": true, 00:07:11.391 "flush": true, 00:07:11.391 "reset": true, 00:07:11.391 "compare": true, 00:07:11.391 "compare_and_write": true, 00:07:11.391 "abort": true, 00:07:11.391 "nvme_admin": true, 00:07:11.391 "nvme_io": true 00:07:11.391 }, 00:07:11.391 "driver_specific": { 00:07:11.391 "nvme": [ 00:07:11.391 { 00:07:11.391 "trid": { 00:07:11.391 "trtype": "TCP", 00:07:11.391 "adrfam": "IPv4", 00:07:11.391 "traddr": "10.0.0.2", 00:07:11.391 "trsvcid": "4420", 00:07:11.391 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:11.391 }, 00:07:11.391 "ctrlr_data": { 00:07:11.391 "cntlid": 1, 00:07:11.391 "vendor_id": "0x8086", 00:07:11.391 "model_number": "SPDK bdev Controller", 00:07:11.391 "serial_number": "SPDK0", 00:07:11.391 "firmware_revision": "24.01.1", 00:07:11.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.391 "oacs": { 00:07:11.391 "security": 0, 00:07:11.391 "format": 0, 00:07:11.391 "firmware": 0, 00:07:11.391 "ns_manage": 0 00:07:11.391 }, 00:07:11.391 "multi_ctrlr": true, 00:07:11.391 "ana_reporting": false 00:07:11.391 }, 00:07:11.391 "vs": { 00:07:11.391 
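On the initiator side, bdevperf is started idle (-z) with its own RPC socket, the exported subsystem is attached as an NVMe bdev, and the resulting Nvme0n1 bdev is listed before the workload is released. A minimal sketch of that step, with flags copied from the trace; running perform_tests in the background mirrors how the script keeps its run_test_pid:

    # -z starts bdevperf idle so bdevs can be attached over RPC before the run begins
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    "$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # namespace visible?

    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!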
"nvme_version": "1.3" 00:07:11.391 }, 00:07:11.391 "ns_data": { 00:07:11.391 "id": 1, 00:07:11.391 "can_share": true 00:07:11.391 } 00:07:11.391 } 00:07:11.391 ], 00:07:11.391 "mp_policy": "active_passive" 00:07:11.391 } 00:07:11.391 } 00:07:11.391 ] 00:07:11.391 07:34:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60804 00:07:11.391 07:34:36 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:11.391 07:34:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:11.391 Running I/O for 10 seconds... 00:07:12.328 Latency(us) 00:07:12.328 [2024-12-02T07:34:37.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.328 [2024-12-02T07:34:37.952Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.328 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:12.328 [2024-12-02T07:34:37.952Z] =================================================================================================================== 00:07:12.328 [2024-12-02T07:34:37.952Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:12.328 00:07:13.266 07:34:38 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:13.525 [2024-12-02T07:34:39.149Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.525 Nvme0n1 : 2.00 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:13.525 [2024-12-02T07:34:39.149Z] =================================================================================================================== 00:07:13.525 [2024-12-02T07:34:39.149Z] Total : 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:13.525 00:07:13.525 true 00:07:13.525 07:34:39 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:13.525 07:34:39 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:14.094 07:34:39 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:14.094 07:34:39 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:14.094 07:34:39 -- target/nvmf_lvs_grow.sh@65 -- # wait 60804 00:07:14.353 [2024-12-02T07:34:39.977Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.353 Nvme0n1 : 3.00 6307.67 24.64 0.00 0.00 0.00 0.00 0.00 00:07:14.353 [2024-12-02T07:34:39.978Z] =================================================================================================================== 00:07:14.354 [2024-12-02T07:34:39.978Z] Total : 6307.67 24.64 0.00 0.00 0.00 0.00 0.00 00:07:14.354 00:07:15.732 [2024-12-02T07:34:41.356Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.732 Nvme0n1 : 4.00 6318.25 24.68 0.00 0.00 0.00 0.00 0.00 00:07:15.732 [2024-12-02T07:34:41.356Z] =================================================================================================================== 00:07:15.732 [2024-12-02T07:34:41.356Z] Total : 6318.25 24.68 0.00 0.00 0.00 0.00 0.00 00:07:15.732 00:07:16.299 [2024-12-02T07:34:41.923Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.299 Nvme0n1 : 5.00 6325.20 24.71 0.00 0.00 0.00 0.00 0.00 00:07:16.299 [2024-12-02T07:34:41.923Z] =================================================================================================================== 00:07:16.299 [2024-12-02T07:34:41.923Z] Total : 6325.20 24.71 
0.00 0.00 0.00 0.00 0.00 00:07:16.299 00:07:17.677 [2024-12-02T07:34:43.301Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.677 Nvme0n1 : 6.00 6308.17 24.64 0.00 0.00 0.00 0.00 0.00 00:07:17.677 [2024-12-02T07:34:43.301Z] =================================================================================================================== 00:07:17.677 [2024-12-02T07:34:43.301Z] Total : 6308.17 24.64 0.00 0.00 0.00 0.00 0.00 00:07:17.677 00:07:18.615 [2024-12-02T07:34:44.239Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.615 Nvme0n1 : 7.00 6296.00 24.59 0.00 0.00 0.00 0.00 0.00 00:07:18.615 [2024-12-02T07:34:44.239Z] =================================================================================================================== 00:07:18.615 [2024-12-02T07:34:44.239Z] Total : 6296.00 24.59 0.00 0.00 0.00 0.00 0.00 00:07:18.615 00:07:19.627 [2024-12-02T07:34:45.251Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.627 Nvme0n1 : 8.00 6286.88 24.56 0.00 0.00 0.00 0.00 0.00 00:07:19.627 [2024-12-02T07:34:45.251Z] =================================================================================================================== 00:07:19.627 [2024-12-02T07:34:45.251Z] Total : 6286.88 24.56 0.00 0.00 0.00 0.00 0.00 00:07:19.627 00:07:20.564 [2024-12-02T07:34:46.188Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.564 Nvme0n1 : 9.00 6279.78 24.53 0.00 0.00 0.00 0.00 0.00 00:07:20.564 [2024-12-02T07:34:46.188Z] =================================================================================================================== 00:07:20.564 [2024-12-02T07:34:46.188Z] Total : 6279.78 24.53 0.00 0.00 0.00 0.00 0.00 00:07:20.564 00:07:21.498 [2024-12-02T07:34:47.122Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.498 Nvme0n1 : 10.00 6261.40 24.46 0.00 0.00 0.00 0.00 0.00 00:07:21.498 [2024-12-02T07:34:47.122Z] =================================================================================================================== 00:07:21.498 [2024-12-02T07:34:47.122Z] Total : 6261.40 24.46 0.00 0.00 0.00 0.00 0.00 00:07:21.498 00:07:21.498 00:07:21.498 Latency(us) 00:07:21.498 [2024-12-02T07:34:47.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.498 [2024-12-02T07:34:47.122Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.498 Nvme0n1 : 10.00 6271.19 24.50 0.00 0.00 20406.07 12571.00 75306.82 00:07:21.498 [2024-12-02T07:34:47.122Z] =================================================================================================================== 00:07:21.498 [2024-12-02T07:34:47.122Z] Total : 6271.19 24.50 0.00 0.00 20406.07 12571.00 75306.82 00:07:21.498 0 00:07:21.498 07:34:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60786 00:07:21.498 07:34:46 -- common/autotest_common.sh@936 -- # '[' -z 60786 ']' 00:07:21.498 07:34:46 -- common/autotest_common.sh@940 -- # kill -0 60786 00:07:21.498 07:34:46 -- common/autotest_common.sh@941 -- # uname 00:07:21.498 07:34:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.498 07:34:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60786 00:07:21.498 07:34:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:21.498 07:34:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:21.498 killing process with pid 60786 00:07:21.498 07:34:46 
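During the 10-second randwrite run above, the lvstore is told to grow into the enlarged backing file, and the pass condition is total_data_clusters moving from 49 to 99 while I/O keeps completing. A sketch of the grow-and-verify step; the truncate to 400 MiB and the rescan actually run earlier in the trace, before the subsystem is exported, and are repeated here only so the sketch is self-contained (the failure message is illustrative):

    # enlarge the file behind the AIO bdev and let SPDK pick up the new size
    truncate -s 400M "$aio_file"
    "$rpc" bdev_aio_rescan aio_bdev

    # grow the lvstore into the new space while bdevperf keeps writing
    "$rpc" bdev_lvol_grow_lvstore -u "$lvs"

    clusters=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || { echo "grow failed: $clusters clusters"; exit 1; }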
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 60786' 00:07:21.498 Received shutdown signal, test time was about 10.000000 seconds 00:07:21.498 00:07:21.498 Latency(us) 00:07:21.498 [2024-12-02T07:34:47.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.499 [2024-12-02T07:34:47.123Z] =================================================================================================================== 00:07:21.499 [2024-12-02T07:34:47.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:21.499 07:34:46 -- common/autotest_common.sh@955 -- # kill 60786 00:07:21.499 07:34:46 -- common/autotest_common.sh@960 -- # wait 60786 00:07:21.756 07:34:47 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:22.014 07:34:47 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:22.014 07:34:47 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:07:22.271 07:34:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:07:22.271 07:34:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:07:22.271 07:34:47 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.271 [2024-12-02 07:34:47.882899] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.529 07:34:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:22.529 07:34:47 -- common/autotest_common.sh@650 -- # local es=0 00:07:22.529 07:34:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:22.529 07:34:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.529 07:34:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.529 07:34:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.529 07:34:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.529 07:34:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.529 07:34:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.529 07:34:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.529 07:34:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:22.529 07:34:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:22.529 request: 00:07:22.529 { 00:07:22.529 "uuid": "9e292055-7a7f-413c-b322-a533f540a410", 00:07:22.529 "method": "bdev_lvol_get_lvstores", 00:07:22.529 "req_id": 1 00:07:22.529 } 00:07:22.529 Got JSON-RPC error response 00:07:22.529 response: 00:07:22.529 { 00:07:22.529 "code": -19, 00:07:22.529 "message": "No such device" 00:07:22.529 } 00:07:22.529 07:34:48 -- common/autotest_common.sh@653 -- # es=1 00:07:22.530 07:34:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.530 07:34:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.530 07:34:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:07:22.530 07:34:48 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.788 aio_bdev 00:07:22.788 07:34:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 74eb2ae8-c747-43d3-827a-712a7fe2ff75 00:07:22.788 07:34:48 -- common/autotest_common.sh@897 -- # local bdev_name=74eb2ae8-c747-43d3-827a-712a7fe2ff75 00:07:22.788 07:34:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:22.788 07:34:48 -- common/autotest_common.sh@899 -- # local i 00:07:22.788 07:34:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:22.788 07:34:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:22.788 07:34:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.047 07:34:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74eb2ae8-c747-43d3-827a-712a7fe2ff75 -t 2000 00:07:23.305 [ 00:07:23.305 { 00:07:23.305 "name": "74eb2ae8-c747-43d3-827a-712a7fe2ff75", 00:07:23.305 "aliases": [ 00:07:23.305 "lvs/lvol" 00:07:23.305 ], 00:07:23.305 "product_name": "Logical Volume", 00:07:23.305 "block_size": 4096, 00:07:23.305 "num_blocks": 38912, 00:07:23.305 "uuid": "74eb2ae8-c747-43d3-827a-712a7fe2ff75", 00:07:23.305 "assigned_rate_limits": { 00:07:23.305 "rw_ios_per_sec": 0, 00:07:23.305 "rw_mbytes_per_sec": 0, 00:07:23.305 "r_mbytes_per_sec": 0, 00:07:23.305 "w_mbytes_per_sec": 0 00:07:23.305 }, 00:07:23.305 "claimed": false, 00:07:23.305 "zoned": false, 00:07:23.305 "supported_io_types": { 00:07:23.305 "read": true, 00:07:23.305 "write": true, 00:07:23.305 "unmap": true, 00:07:23.305 "write_zeroes": true, 00:07:23.305 "flush": false, 00:07:23.305 "reset": true, 00:07:23.305 "compare": false, 00:07:23.305 "compare_and_write": false, 00:07:23.305 "abort": false, 00:07:23.305 "nvme_admin": false, 00:07:23.305 "nvme_io": false 00:07:23.305 }, 00:07:23.305 "driver_specific": { 00:07:23.306 "lvol": { 00:07:23.306 "lvol_store_uuid": "9e292055-7a7f-413c-b322-a533f540a410", 00:07:23.306 "base_bdev": "aio_bdev", 00:07:23.306 "thin_provision": false, 00:07:23.306 "snapshot": false, 00:07:23.306 "clone": false, 00:07:23.306 "esnap_clone": false 00:07:23.306 } 00:07:23.306 } 00:07:23.306 } 00:07:23.306 ] 00:07:23.306 07:34:48 -- common/autotest_common.sh@905 -- # return 0 00:07:23.306 07:34:48 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:23.306 07:34:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:07:23.564 07:34:49 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:07:23.564 07:34:49 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:23.564 07:34:49 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:07:23.822 07:34:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:07:23.822 07:34:49 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 74eb2ae8-c747-43d3-827a-712a7fe2ff75 00:07:24.081 07:34:49 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e292055-7a7f-413c-b322-a533f540a410 00:07:24.340 07:34:49 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
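The teardown half of the clean test is itself a check: the AIO bdev is deleted out from under the lvstore, bdev_lvol_get_lvstores must then fail with -19 (No such device), and re-creating the AIO bdev lets examine-on-attach rediscover the volume with the grown geometry intact (99 clusters total, 61 free) before everything is deleted for real. A sketch of that negative-then-positive check, using a plain if in place of the suite's NOT helper:

    "$rpc" bdev_aio_delete aio_bdev

    # the lvstore must be gone once its base bdev is removed
    if "$rpc" bdev_lvol_get_lvstores -u "$lvs" >/dev/null 2>&1; then
        echo "lvstore unexpectedly still present"; exit 1
    fi

    # re-create the base bdev; examine-on-attach re-registers the lvol automatically
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null

    # the grown geometry must survive the reload
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | \
        jq -e '.[0].total_data_clusters == 99 and .[0].free_clusters == 61'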
00:07:24.598 07:34:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:25.164 ************************************ 00:07:25.164 END TEST lvs_grow_clean 00:07:25.164 ************************************ 00:07:25.164 00:07:25.164 real 0m17.582s 00:07:25.164 user 0m16.882s 00:07:25.164 sys 0m2.236s 00:07:25.164 07:34:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.164 07:34:50 -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:25.164 07:34:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.164 07:34:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.164 07:34:50 -- common/autotest_common.sh@10 -- # set +x 00:07:25.164 ************************************ 00:07:25.164 START TEST lvs_grow_dirty 00:07:25.164 ************************************ 00:07:25.164 07:34:50 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:25.164 07:34:50 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.421 07:34:50 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:25.421 07:34:50 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:25.679 07:34:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:25.679 07:34:51 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:25.679 07:34:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:25.937 07:34:51 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:25.937 07:34:51 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:25.938 07:34:51 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 lvol 150 00:07:26.196 07:34:51 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:26.196 07:34:51 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:26.196 07:34:51 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:26.455 [2024-12-02 07:34:51.819586] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:26.455 [2024-12-02 07:34:51.819698] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:26.455 true 00:07:26.455 07:34:51 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:26.455 07:34:51 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:26.456 07:34:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:26.456 07:34:52 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:26.714 07:34:52 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:26.973 07:34:52 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.232 07:34:52 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:27.492 07:34:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61049 00:07:27.492 07:34:52 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:27.492 07:34:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.492 07:34:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61049 /var/tmp/bdevperf.sock 00:07:27.492 07:34:52 -- common/autotest_common.sh@829 -- # '[' -z 61049 ']' 00:07:27.492 07:34:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:27.492 07:34:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.492 07:34:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:27.492 07:34:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.492 07:34:52 -- common/autotest_common.sh@10 -- # set +x 00:07:27.492 [2024-12-02 07:34:52.989775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.492 [2024-12-02 07:34:52.989859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61049 ] 00:07:27.751 [2024-12-02 07:34:53.126627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.751 [2024-12-02 07:34:53.192863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.318 07:34:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.318 07:34:53 -- common/autotest_common.sh@862 -- # return 0 00:07:28.318 07:34:53 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:28.577 Nvme0n1 00:07:28.577 07:34:54 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:28.836 [ 00:07:28.836 { 00:07:28.836 "name": "Nvme0n1", 00:07:28.836 "aliases": [ 00:07:28.836 "0aac1da2-45e9-4f29-bf6e-5124f6e1cacf" 00:07:28.836 ], 00:07:28.836 "product_name": "NVMe disk", 00:07:28.836 "block_size": 4096, 00:07:28.836 "num_blocks": 38912, 00:07:28.836 "uuid": "0aac1da2-45e9-4f29-bf6e-5124f6e1cacf", 00:07:28.836 "assigned_rate_limits": { 00:07:28.836 "rw_ios_per_sec": 0, 00:07:28.836 "rw_mbytes_per_sec": 0, 00:07:28.836 "r_mbytes_per_sec": 0, 00:07:28.836 "w_mbytes_per_sec": 0 00:07:28.836 }, 00:07:28.836 "claimed": false, 00:07:28.836 "zoned": false, 00:07:28.836 "supported_io_types": { 00:07:28.836 "read": true, 00:07:28.836 "write": true, 00:07:28.836 "unmap": true, 00:07:28.836 "write_zeroes": true, 00:07:28.836 "flush": true, 00:07:28.836 "reset": true, 00:07:28.836 "compare": true, 00:07:28.836 "compare_and_write": true, 00:07:28.836 "abort": true, 00:07:28.836 "nvme_admin": true, 00:07:28.836 "nvme_io": true 00:07:28.836 }, 00:07:28.836 "driver_specific": { 00:07:28.836 "nvme": [ 00:07:28.836 { 00:07:28.836 "trid": { 00:07:28.836 "trtype": "TCP", 00:07:28.836 "adrfam": "IPv4", 00:07:28.836 "traddr": "10.0.0.2", 00:07:28.836 "trsvcid": "4420", 00:07:28.836 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:28.836 }, 00:07:28.836 "ctrlr_data": { 00:07:28.836 "cntlid": 1, 00:07:28.836 "vendor_id": "0x8086", 00:07:28.836 "model_number": "SPDK bdev Controller", 00:07:28.836 "serial_number": "SPDK0", 00:07:28.836 "firmware_revision": "24.01.1", 00:07:28.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:28.836 "oacs": { 00:07:28.836 "security": 0, 00:07:28.836 "format": 0, 00:07:28.836 "firmware": 0, 00:07:28.836 "ns_manage": 0 00:07:28.836 }, 00:07:28.836 "multi_ctrlr": true, 00:07:28.836 "ana_reporting": false 00:07:28.836 }, 00:07:28.836 "vs": { 00:07:28.836 "nvme_version": "1.3" 00:07:28.836 }, 00:07:28.836 "ns_data": { 00:07:28.836 "id": 1, 00:07:28.836 "can_share": true 00:07:28.836 } 00:07:28.836 } 00:07:28.836 ], 00:07:28.836 "mp_policy": "active_passive" 00:07:28.836 } 00:07:28.836 } 00:07:28.836 ] 00:07:28.836 07:34:54 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61067 00:07:28.836 07:34:54 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:28.836 07:34:54 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.095 Running I/O for 10 seconds... 
00:07:30.044 Latency(us) 00:07:30.044 [2024-12-02T07:34:55.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.044 [2024-12-02T07:34:55.668Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.044 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:30.044 [2024-12-02T07:34:55.668Z] =================================================================================================================== 00:07:30.044 [2024-12-02T07:34:55.668Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:07:30.044 00:07:30.980 07:34:56 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:30.980 [2024-12-02T07:34:56.604Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.980 Nvme0n1 : 2.00 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:30.980 [2024-12-02T07:34:56.604Z] =================================================================================================================== 00:07:30.980 [2024-12-02T07:34:56.604Z] Total : 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:30.980 00:07:31.239 true 00:07:31.239 07:34:56 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:31.239 07:34:56 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:31.498 07:34:57 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:31.498 07:34:57 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:31.498 07:34:57 -- target/nvmf_lvs_grow.sh@65 -- # wait 61067 00:07:32.065 [2024-12-02T07:34:57.689Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.065 Nvme0n1 : 3.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:32.065 [2024-12-02T07:34:57.689Z] =================================================================================================================== 00:07:32.065 [2024-12-02T07:34:57.689Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:32.065 00:07:33.002 [2024-12-02T07:34:58.626Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.002 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:33.002 [2024-12-02T07:34:58.626Z] =================================================================================================================== 00:07:33.002 [2024-12-02T07:34:58.626Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:33.002 00:07:33.938 [2024-12-02T07:34:59.562Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.938 Nvme0n1 : 5.00 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:07:33.938 [2024-12-02T07:34:59.562Z] =================================================================================================================== 00:07:33.938 [2024-12-02T07:34:59.562Z] Total : 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:07:33.938 00:07:34.873 [2024-12-02T07:35:00.498Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.874 Nvme0n1 : 6.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:34.874 [2024-12-02T07:35:00.498Z] =================================================================================================================== 00:07:34.874 [2024-12-02T07:35:00.498Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:34.874 00:07:36.251 [2024-12-02T07:35:01.875Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:36.251 Nvme0n1 : 7.00 6404.43 25.02 0.00 0.00 0.00 0.00 0.00 00:07:36.251 [2024-12-02T07:35:01.875Z] =================================================================================================================== 00:07:36.251 [2024-12-02T07:35:01.875Z] Total : 6404.43 25.02 0.00 0.00 0.00 0.00 0.00 00:07:36.251 00:07:37.188 [2024-12-02T07:35:02.812Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.188 Nvme0n1 : 8.00 6251.88 24.42 0.00 0.00 0.00 0.00 0.00 00:07:37.188 [2024-12-02T07:35:02.812Z] =================================================================================================================== 00:07:37.188 [2024-12-02T07:35:02.812Z] Total : 6251.88 24.42 0.00 0.00 0.00 0.00 0.00 00:07:37.188 00:07:38.123 [2024-12-02T07:35:03.747Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.123 Nvme0n1 : 9.00 6234.56 24.35 0.00 0.00 0.00 0.00 0.00 00:07:38.123 [2024-12-02T07:35:03.747Z] =================================================================================================================== 00:07:38.123 [2024-12-02T07:35:03.747Z] Total : 6234.56 24.35 0.00 0.00 0.00 0.00 0.00 00:07:38.123 00:07:39.113 [2024-12-02T07:35:04.737Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.113 Nvme0n1 : 10.00 6233.40 24.35 0.00 0.00 0.00 0.00 0.00 00:07:39.113 [2024-12-02T07:35:04.737Z] =================================================================================================================== 00:07:39.113 [2024-12-02T07:35:04.737Z] Total : 6233.40 24.35 0.00 0.00 0.00 0.00 0.00 00:07:39.113 00:07:39.113 00:07:39.113 Latency(us) 00:07:39.113 [2024-12-02T07:35:04.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.113 [2024-12-02T07:35:04.737Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.113 Nvme0n1 : 10.01 6237.36 24.36 0.00 0.00 20514.91 10187.87 239265.98 00:07:39.113 [2024-12-02T07:35:04.737Z] =================================================================================================================== 00:07:39.113 [2024-12-02T07:35:04.737Z] Total : 6237.36 24.36 0.00 0.00 20514.91 10187.87 239265.98 00:07:39.113 0 00:07:39.113 07:35:04 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61049 00:07:39.113 07:35:04 -- common/autotest_common.sh@936 -- # '[' -z 61049 ']' 00:07:39.113 07:35:04 -- common/autotest_common.sh@940 -- # kill -0 61049 00:07:39.113 07:35:04 -- common/autotest_common.sh@941 -- # uname 00:07:39.113 07:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:39.113 07:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61049 00:07:39.113 killing process with pid 61049 00:07:39.113 Received shutdown signal, test time was about 10.000000 seconds 00:07:39.113 00:07:39.113 Latency(us) 00:07:39.113 [2024-12-02T07:35:04.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.113 [2024-12-02T07:35:04.737Z] =================================================================================================================== 00:07:39.113 [2024-12-02T07:35:04.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:39.113 07:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:39.113 07:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:39.113 07:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61049' 00:07:39.113 07:35:04 -- 
common/autotest_common.sh@955 -- # kill 61049 00:07:39.113 07:35:04 -- common/autotest_common.sh@960 -- # wait 61049 00:07:39.113 07:35:04 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60702 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@74 -- # wait 60702 00:07:39.677 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60702 Killed "${NVMF_APP[@]}" "$@" 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@74 -- # true 00:07:39.677 07:35:05 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:07:39.677 07:35:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:39.677 07:35:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.677 07:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:39.935 07:35:05 -- nvmf/common.sh@469 -- # nvmfpid=61199 00:07:39.935 07:35:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:39.935 07:35:05 -- nvmf/common.sh@470 -- # waitforlisten 61199 00:07:39.935 07:35:05 -- common/autotest_common.sh@829 -- # '[' -z 61199 ']' 00:07:39.935 07:35:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.935 07:35:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.935 07:35:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.935 07:35:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.935 07:35:05 -- common/autotest_common.sh@10 -- # set +x 00:07:39.935 [2024-12-02 07:35:05.359830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.935 [2024-12-02 07:35:05.359920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.936 [2024-12-02 07:35:05.497921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.936 [2024-12-02 07:35:05.545982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.936 [2024-12-02 07:35:05.546434] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.936 [2024-12-02 07:35:05.546456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.936 [2024-12-02 07:35:05.546465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:39.936 [2024-12-02 07:35:05.546500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.870 07:35:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.870 07:35:06 -- common/autotest_common.sh@862 -- # return 0 00:07:40.870 07:35:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:40.870 07:35:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.870 07:35:06 -- common/autotest_common.sh@10 -- # set +x 00:07:40.870 07:35:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.870 07:35:06 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.129 [2024-12-02 07:35:06.521640] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:41.129 [2024-12-02 07:35:06.522535] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:41.129 [2024-12-02 07:35:06.522942] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:41.129 07:35:06 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:07:41.129 07:35:06 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:41.129 07:35:06 -- common/autotest_common.sh@897 -- # local bdev_name=0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:41.129 07:35:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:41.129 07:35:06 -- common/autotest_common.sh@899 -- # local i 00:07:41.129 07:35:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:41.129 07:35:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:41.129 07:35:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.388 07:35:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0aac1da2-45e9-4f29-bf6e-5124f6e1cacf -t 2000 00:07:41.388 [ 00:07:41.388 { 00:07:41.388 "name": "0aac1da2-45e9-4f29-bf6e-5124f6e1cacf", 00:07:41.388 "aliases": [ 00:07:41.388 "lvs/lvol" 00:07:41.388 ], 00:07:41.388 "product_name": "Logical Volume", 00:07:41.388 "block_size": 4096, 00:07:41.388 "num_blocks": 38912, 00:07:41.388 "uuid": "0aac1da2-45e9-4f29-bf6e-5124f6e1cacf", 00:07:41.388 "assigned_rate_limits": { 00:07:41.388 "rw_ios_per_sec": 0, 00:07:41.388 "rw_mbytes_per_sec": 0, 00:07:41.388 "r_mbytes_per_sec": 0, 00:07:41.388 "w_mbytes_per_sec": 0 00:07:41.388 }, 00:07:41.388 "claimed": false, 00:07:41.388 "zoned": false, 00:07:41.388 "supported_io_types": { 00:07:41.388 "read": true, 00:07:41.388 "write": true, 00:07:41.388 "unmap": true, 00:07:41.388 "write_zeroes": true, 00:07:41.388 "flush": false, 00:07:41.388 "reset": true, 00:07:41.388 "compare": false, 00:07:41.388 "compare_and_write": false, 00:07:41.388 "abort": false, 00:07:41.388 "nvme_admin": false, 00:07:41.388 "nvme_io": false 00:07:41.388 }, 00:07:41.388 "driver_specific": { 00:07:41.388 "lvol": { 00:07:41.388 "lvol_store_uuid": "54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5", 00:07:41.388 "base_bdev": "aio_bdev", 00:07:41.388 "thin_provision": false, 00:07:41.388 "snapshot": false, 00:07:41.388 "clone": false, 00:07:41.388 "esnap_clone": false 00:07:41.388 } 00:07:41.388 } 00:07:41.388 } 00:07:41.388 ] 00:07:41.388 07:35:06 -- common/autotest_common.sh@905 -- # return 0 00:07:41.388 07:35:06 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:07:41.388 07:35:06 -- target/nvmf_lvs_grow.sh@78 -- # 
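The dirty variant differs only in how it gets here: the target was killed with SIGKILL while the grown lvstore was still open, a fresh nvmf_tgt was started, and re-creating the AIO bdev forced the blobstore to replay its metadata, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices above show. A sketch of that crash-and-recover sequence (the wait for the RPC socket is elided, and the jq assertion is illustrative):

    # simulate a crash: no clean lvstore unload, so the superblock stays marked dirty
    kill -9 "$nvmfpid"
    wait "$nvmfpid" || true

    # bring up a fresh target in the same namespace, pointed at the same backing file
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # ... wait for /var/tmp/spdk.sock as before ...

    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096    # triggers blobstore recovery + examine
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null

    # recovery must preserve the grown geometry: 99 clusters total, 61 still free
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | \
        jq -e '.[0].total_data_clusters == 99 and .[0].free_clusters == 61'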
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:41.647 07:35:07 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:07:41.647 07:35:07 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:41.647 07:35:07 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:07:41.906 07:35:07 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:07:41.906 07:35:07 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.165 [2024-12-02 07:35:07.676232] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:42.165 07:35:07 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:42.165 07:35:07 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.165 07:35:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:42.165 07:35:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.165 07:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.165 07:35:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.165 07:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.165 07:35:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.165 07:35:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.165 07:35:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.165 07:35:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:42.165 07:35:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:42.424 request: 00:07:42.424 { 00:07:42.424 "uuid": "54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5", 00:07:42.424 "method": "bdev_lvol_get_lvstores", 00:07:42.424 "req_id": 1 00:07:42.424 } 00:07:42.424 Got JSON-RPC error response 00:07:42.424 response: 00:07:42.424 { 00:07:42.424 "code": -19, 00:07:42.424 "message": "No such device" 00:07:42.424 } 00:07:42.424 07:35:07 -- common/autotest_common.sh@653 -- # es=1 00:07:42.424 07:35:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.424 07:35:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.424 07:35:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.424 07:35:07 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.682 aio_bdev 00:07:42.682 07:35:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:42.682 07:35:08 -- common/autotest_common.sh@897 -- # local bdev_name=0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:42.682 07:35:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:42.682 07:35:08 -- common/autotest_common.sh@899 -- # local i 00:07:42.682 07:35:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:42.682 07:35:08 -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:07:42.682 07:35:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:42.940 07:35:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0aac1da2-45e9-4f29-bf6e-5124f6e1cacf -t 2000 00:07:43.199 [ 00:07:43.199 { 00:07:43.199 "name": "0aac1da2-45e9-4f29-bf6e-5124f6e1cacf", 00:07:43.199 "aliases": [ 00:07:43.199 "lvs/lvol" 00:07:43.199 ], 00:07:43.199 "product_name": "Logical Volume", 00:07:43.199 "block_size": 4096, 00:07:43.199 "num_blocks": 38912, 00:07:43.199 "uuid": "0aac1da2-45e9-4f29-bf6e-5124f6e1cacf", 00:07:43.199 "assigned_rate_limits": { 00:07:43.199 "rw_ios_per_sec": 0, 00:07:43.199 "rw_mbytes_per_sec": 0, 00:07:43.199 "r_mbytes_per_sec": 0, 00:07:43.199 "w_mbytes_per_sec": 0 00:07:43.199 }, 00:07:43.199 "claimed": false, 00:07:43.199 "zoned": false, 00:07:43.199 "supported_io_types": { 00:07:43.199 "read": true, 00:07:43.199 "write": true, 00:07:43.199 "unmap": true, 00:07:43.199 "write_zeroes": true, 00:07:43.199 "flush": false, 00:07:43.199 "reset": true, 00:07:43.199 "compare": false, 00:07:43.199 "compare_and_write": false, 00:07:43.199 "abort": false, 00:07:43.199 "nvme_admin": false, 00:07:43.199 "nvme_io": false 00:07:43.199 }, 00:07:43.199 "driver_specific": { 00:07:43.199 "lvol": { 00:07:43.199 "lvol_store_uuid": "54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5", 00:07:43.199 "base_bdev": "aio_bdev", 00:07:43.199 "thin_provision": false, 00:07:43.199 "snapshot": false, 00:07:43.199 "clone": false, 00:07:43.199 "esnap_clone": false 00:07:43.199 } 00:07:43.199 } 00:07:43.199 } 00:07:43.199 ] 00:07:43.199 07:35:08 -- common/autotest_common.sh@905 -- # return 0 00:07:43.199 07:35:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:07:43.199 07:35:08 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:43.458 07:35:08 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:07:43.458 07:35:08 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:07:43.458 07:35:08 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:43.717 07:35:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:07:43.717 07:35:09 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0aac1da2-45e9-4f29-bf6e-5124f6e1cacf 00:07:43.976 07:35:09 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54958d18-9bb2-4fbd-a3b0-82bb41e1f9a5 00:07:43.976 07:35:09 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.234 07:35:09 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:44.493 ************************************ 00:07:44.493 END TEST lvs_grow_dirty 00:07:44.493 ************************************ 00:07:44.493 00:07:44.493 real 0m19.546s 00:07:44.493 user 0m38.616s 00:07:44.493 sys 0m9.748s 00:07:44.493 07:35:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.493 07:35:10 -- common/autotest_common.sh@10 -- # set +x 00:07:44.752 07:35:10 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:44.752 07:35:10 -- common/autotest_common.sh@806 -- # type=--id 00:07:44.752 07:35:10 -- common/autotest_common.sh@807 -- # id=0 00:07:44.752 
07:35:10 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:07:44.752 07:35:10 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:44.752 07:35:10 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:07:44.752 07:35:10 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:07:44.752 07:35:10 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:07:44.752 07:35:10 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:44.752 nvmf_trace.0 00:07:44.752 07:35:10 -- common/autotest_common.sh@821 -- # return 0 00:07:44.752 07:35:10 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:44.752 07:35:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:44.752 07:35:10 -- nvmf/common.sh@116 -- # sync 00:07:45.322 07:35:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:45.322 07:35:10 -- nvmf/common.sh@119 -- # set +e 00:07:45.322 07:35:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:45.322 07:35:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:45.322 rmmod nvme_tcp 00:07:45.322 rmmod nvme_fabrics 00:07:45.322 rmmod nvme_keyring 00:07:45.322 07:35:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:45.322 07:35:10 -- nvmf/common.sh@123 -- # set -e 00:07:45.322 07:35:10 -- nvmf/common.sh@124 -- # return 0 00:07:45.322 07:35:10 -- nvmf/common.sh@477 -- # '[' -n 61199 ']' 00:07:45.322 07:35:10 -- nvmf/common.sh@478 -- # killprocess 61199 00:07:45.322 07:35:10 -- common/autotest_common.sh@936 -- # '[' -z 61199 ']' 00:07:45.322 07:35:10 -- common/autotest_common.sh@940 -- # kill -0 61199 00:07:45.322 07:35:10 -- common/autotest_common.sh@941 -- # uname 00:07:45.322 07:35:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:45.322 07:35:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61199 00:07:45.322 killing process with pid 61199 00:07:45.322 07:35:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:45.322 07:35:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:45.322 07:35:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61199' 00:07:45.322 07:35:10 -- common/autotest_common.sh@955 -- # kill 61199 00:07:45.322 07:35:10 -- common/autotest_common.sh@960 -- # wait 61199 00:07:45.582 07:35:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:45.582 07:35:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:45.582 07:35:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:45.582 07:35:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.582 07:35:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:45.582 07:35:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.582 07:35:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.582 07:35:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.582 07:35:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:45.582 00:07:45.582 real 0m40.039s 00:07:45.582 user 1m1.936s 00:07:45.582 sys 0m13.039s 00:07:45.582 07:35:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.582 ************************************ 00:07:45.582 END TEST nvmf_lvs_grow 00:07:45.582 ************************************ 00:07:45.582 07:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:45.582 07:35:11 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:45.582 07:35:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.582 07:35:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.582 07:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:45.582 ************************************ 00:07:45.582 START TEST nvmf_bdev_io_wait 00:07:45.582 ************************************ 00:07:45.582 07:35:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:45.582 * Looking for test storage... 00:07:45.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.841 07:35:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:45.841 07:35:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:45.841 07:35:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:45.841 07:35:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:45.841 07:35:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:45.841 07:35:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:45.841 07:35:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:45.841 07:35:11 -- scripts/common.sh@335 -- # IFS=.-: 00:07:45.841 07:35:11 -- scripts/common.sh@335 -- # read -ra ver1 00:07:45.841 07:35:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.841 07:35:11 -- scripts/common.sh@336 -- # read -ra ver2 00:07:45.841 07:35:11 -- scripts/common.sh@337 -- # local 'op=<' 00:07:45.841 07:35:11 -- scripts/common.sh@339 -- # ver1_l=2 00:07:45.841 07:35:11 -- scripts/common.sh@340 -- # ver2_l=1 00:07:45.841 07:35:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:45.841 07:35:11 -- scripts/common.sh@343 -- # case "$op" in 00:07:45.841 07:35:11 -- scripts/common.sh@344 -- # : 1 00:07:45.841 07:35:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:45.841 07:35:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.841 07:35:11 -- scripts/common.sh@364 -- # decimal 1 00:07:45.841 07:35:11 -- scripts/common.sh@352 -- # local d=1 00:07:45.841 07:35:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.841 07:35:11 -- scripts/common.sh@354 -- # echo 1 00:07:45.841 07:35:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:45.841 07:35:11 -- scripts/common.sh@365 -- # decimal 2 00:07:45.841 07:35:11 -- scripts/common.sh@352 -- # local d=2 00:07:45.841 07:35:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.841 07:35:11 -- scripts/common.sh@354 -- # echo 2 00:07:45.841 07:35:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:45.841 07:35:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:45.841 07:35:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:45.841 07:35:11 -- scripts/common.sh@367 -- # return 0 00:07:45.841 07:35:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.841 07:35:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:45.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.841 --rc genhtml_branch_coverage=1 00:07:45.841 --rc genhtml_function_coverage=1 00:07:45.841 --rc genhtml_legend=1 00:07:45.841 --rc geninfo_all_blocks=1 00:07:45.841 --rc geninfo_unexecuted_blocks=1 00:07:45.841 00:07:45.841 ' 00:07:45.841 07:35:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:45.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.841 --rc genhtml_branch_coverage=1 00:07:45.841 --rc genhtml_function_coverage=1 00:07:45.841 --rc genhtml_legend=1 00:07:45.841 --rc geninfo_all_blocks=1 00:07:45.841 --rc geninfo_unexecuted_blocks=1 00:07:45.841 00:07:45.841 ' 00:07:45.841 07:35:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:45.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.841 --rc genhtml_branch_coverage=1 00:07:45.841 --rc genhtml_function_coverage=1 00:07:45.841 --rc genhtml_legend=1 00:07:45.841 --rc geninfo_all_blocks=1 00:07:45.841 --rc geninfo_unexecuted_blocks=1 00:07:45.841 00:07:45.841 ' 00:07:45.841 07:35:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:45.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.841 --rc genhtml_branch_coverage=1 00:07:45.841 --rc genhtml_function_coverage=1 00:07:45.841 --rc genhtml_legend=1 00:07:45.841 --rc geninfo_all_blocks=1 00:07:45.841 --rc geninfo_unexecuted_blocks=1 00:07:45.841 00:07:45.841 ' 00:07:45.841 07:35:11 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.841 07:35:11 -- nvmf/common.sh@7 -- # uname -s 00:07:45.841 07:35:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.841 07:35:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.841 07:35:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.841 07:35:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.841 07:35:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.841 07:35:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.841 07:35:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.841 07:35:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.841 07:35:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.841 07:35:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.841 07:35:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 
00:07:45.841 07:35:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:07:45.841 07:35:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.841 07:35:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.841 07:35:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.841 07:35:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.841 07:35:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.841 07:35:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.841 07:35:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.841 07:35:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.841 07:35:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.841 07:35:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.841 07:35:11 -- paths/export.sh@5 -- # export PATH 00:07:45.841 07:35:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.841 07:35:11 -- nvmf/common.sh@46 -- # : 0 00:07:45.841 07:35:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:45.841 07:35:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:45.841 07:35:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:45.841 07:35:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.841 07:35:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.841 07:35:11 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:45.841 07:35:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:45.841 07:35:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:45.841 07:35:11 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.841 07:35:11 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.841 07:35:11 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:45.841 07:35:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:45.841 07:35:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.841 07:35:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:45.841 07:35:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:45.841 07:35:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:45.841 07:35:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.841 07:35:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.841 07:35:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.841 07:35:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:45.841 07:35:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:45.841 07:35:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:45.841 07:35:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:45.841 07:35:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:45.841 07:35:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:45.841 07:35:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.841 07:35:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.841 07:35:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.841 07:35:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:45.841 07:35:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.842 07:35:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.842 07:35:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.842 07:35:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.842 07:35:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.842 07:35:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.842 07:35:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.842 07:35:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.842 07:35:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:45.842 07:35:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:45.842 Cannot find device "nvmf_tgt_br" 00:07:45.842 07:35:11 -- nvmf/common.sh@154 -- # true 00:07:45.842 07:35:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.842 Cannot find device "nvmf_tgt_br2" 00:07:45.842 07:35:11 -- nvmf/common.sh@155 -- # true 00:07:45.842 07:35:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:45.842 07:35:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:45.842 Cannot find device "nvmf_tgt_br" 00:07:45.842 07:35:11 -- nvmf/common.sh@157 -- # true 00:07:45.842 07:35:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:45.842 Cannot find device "nvmf_tgt_br2" 00:07:45.842 07:35:11 -- nvmf/common.sh@158 -- # true 00:07:45.842 07:35:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:45.842 07:35:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:46.101 07:35:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.101 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.101 07:35:11 -- nvmf/common.sh@161 -- # true 00:07:46.101 07:35:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.101 07:35:11 -- nvmf/common.sh@162 -- # true 00:07:46.101 07:35:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.101 07:35:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.101 07:35:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.101 07:35:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.101 07:35:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.101 07:35:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.101 07:35:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.101 07:35:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:46.101 07:35:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:46.101 07:35:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:46.101 07:35:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:46.101 07:35:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:46.101 07:35:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:46.101 07:35:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.101 07:35:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.101 07:35:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.101 07:35:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:46.101 07:35:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:46.101 07:35:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.101 07:35:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.101 07:35:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.101 07:35:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.101 07:35:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.101 07:35:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:46.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:07:46.101 00:07:46.101 --- 10.0.0.2 ping statistics --- 00:07:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.101 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:46.101 07:35:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:46.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:46.101 00:07:46.101 --- 10.0.0.3 ping statistics --- 00:07:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.101 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:46.101 07:35:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:46.101 00:07:46.101 --- 10.0.0.1 ping statistics --- 00:07:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.101 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:46.101 07:35:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.101 07:35:11 -- nvmf/common.sh@421 -- # return 0 00:07:46.101 07:35:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:46.101 07:35:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.101 07:35:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:46.101 07:35:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:46.101 07:35:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.101 07:35:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:46.101 07:35:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:46.101 07:35:11 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:46.101 07:35:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:46.101 07:35:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.101 07:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:46.101 07:35:11 -- nvmf/common.sh@469 -- # nvmfpid=61518 00:07:46.101 07:35:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:46.101 07:35:11 -- nvmf/common.sh@470 -- # waitforlisten 61518 00:07:46.101 07:35:11 -- common/autotest_common.sh@829 -- # '[' -z 61518 ']' 00:07:46.101 07:35:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.101 07:35:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.101 07:35:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.101 07:35:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.101 07:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:46.361 [2024-12-02 07:35:11.730053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.361 [2024-12-02 07:35:11.730131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.361 [2024-12-02 07:35:11.865115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.361 [2024-12-02 07:35:11.917207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.361 [2024-12-02 07:35:11.917379] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.361 [2024-12-02 07:35:11.917393] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.361 [2024-12-02 07:35:11.917401] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
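Behind the nvmftestinit output above is a small veth/bridge topology: the initiator interface stays in the root namespace on 10.0.0.1, the two target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, everything is tied together through the nvmf_br bridge, and TCP port 4420 is allowed in. A condensed sketch of that setup, restricted to commands already visible in the log (the link-up steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT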
00:07:46.361 [2024-12-02 07:35:11.917837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.361 [2024-12-02 07:35:11.918389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.361 [2024-12-02 07:35:11.918453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.361 [2024-12-02 07:35:11.918459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.361 07:35:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.361 07:35:11 -- common/autotest_common.sh@862 -- # return 0 00:07:46.361 07:35:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:46.361 07:35:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.361 07:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 07:35:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 [2024-12-02 07:35:12.062811] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 Malloc0 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.621 07:35:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.621 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 [2024-12-02 07:35:12.120697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.621 07:35:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61550 00:07:46.621 07:35:12 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@30 -- # READ_PID=61552 00:07:46.621 07:35:12 -- nvmf/common.sh@520 -- # config=() 00:07:46.621 07:35:12 -- nvmf/common.sh@520 -- # local subsystem config 00:07:46.621 07:35:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:46.621 07:35:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:46.621 { 00:07:46.621 "params": { 00:07:46.621 "name": "Nvme$subsystem", 00:07:46.621 "trtype": "$TEST_TRANSPORT", 00:07:46.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.621 "adrfam": "ipv4", 00:07:46.621 "trsvcid": "$NVMF_PORT", 00:07:46.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.621 "hdgst": ${hdgst:-false}, 00:07:46.621 "ddgst": ${ddgst:-false} 00:07:46.621 }, 00:07:46.621 "method": "bdev_nvme_attach_controller" 00:07:46.621 } 00:07:46.621 EOF 00:07:46.621 )") 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:46.621 07:35:12 -- nvmf/common.sh@520 -- # config=() 00:07:46.621 07:35:12 -- nvmf/common.sh@520 -- # local subsystem config 00:07:46.621 07:35:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:46.621 07:35:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:46.621 { 00:07:46.621 "params": { 00:07:46.621 "name": "Nvme$subsystem", 00:07:46.621 "trtype": "$TEST_TRANSPORT", 00:07:46.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.621 "adrfam": "ipv4", 00:07:46.621 "trsvcid": "$NVMF_PORT", 00:07:46.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.621 "hdgst": ${hdgst:-false}, 00:07:46.621 "ddgst": ${ddgst:-false} 00:07:46.621 }, 00:07:46.621 "method": "bdev_nvme_attach_controller" 00:07:46.621 } 00:07:46.621 EOF 00:07:46.621 )") 00:07:46.621 07:35:12 -- nvmf/common.sh@542 -- # cat 00:07:46.621 07:35:12 -- nvmf/common.sh@542 -- # cat 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:46.621 07:35:12 -- nvmf/common.sh@544 -- # jq . 00:07:46.621 07:35:12 -- nvmf/common.sh@544 -- # jq . 
00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61555 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61562 00:07:46.621 07:35:12 -- nvmf/common.sh@545 -- # IFS=, 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@35 -- # sync 00:07:46.621 07:35:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:46.621 "params": { 00:07:46.621 "name": "Nvme1", 00:07:46.621 "trtype": "tcp", 00:07:46.621 "traddr": "10.0.0.2", 00:07:46.621 "adrfam": "ipv4", 00:07:46.621 "trsvcid": "4420", 00:07:46.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.621 "hdgst": false, 00:07:46.621 "ddgst": false 00:07:46.621 }, 00:07:46.621 "method": "bdev_nvme_attach_controller" 00:07:46.621 }' 00:07:46.621 07:35:12 -- nvmf/common.sh@545 -- # IFS=, 00:07:46.621 07:35:12 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:46.621 07:35:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:46.621 "params": { 00:07:46.621 "name": "Nvme1", 00:07:46.621 "trtype": "tcp", 00:07:46.621 "traddr": "10.0.0.2", 00:07:46.621 "adrfam": "ipv4", 00:07:46.621 "trsvcid": "4420", 00:07:46.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.621 "hdgst": false, 00:07:46.621 "ddgst": false 00:07:46.621 }, 00:07:46.621 "method": "bdev_nvme_attach_controller" 00:07:46.621 }' 00:07:46.621 07:35:12 -- nvmf/common.sh@520 -- # config=() 00:07:46.621 07:35:12 -- nvmf/common.sh@520 -- # local subsystem config 00:07:46.621 07:35:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:46.621 07:35:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:46.622 { 00:07:46.622 "params": { 00:07:46.622 "name": "Nvme$subsystem", 00:07:46.622 "trtype": "$TEST_TRANSPORT", 00:07:46.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.622 "adrfam": "ipv4", 00:07:46.622 "trsvcid": "$NVMF_PORT", 00:07:46.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.622 "hdgst": ${hdgst:-false}, 00:07:46.622 "ddgst": ${ddgst:-false} 00:07:46.622 }, 00:07:46.622 "method": "bdev_nvme_attach_controller" 00:07:46.622 } 00:07:46.622 EOF 00:07:46.622 )") 00:07:46.622 07:35:12 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:46.622 07:35:12 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:46.622 07:35:12 -- nvmf/common.sh@542 -- # cat 00:07:46.622 07:35:12 -- nvmf/common.sh@520 -- # config=() 00:07:46.622 07:35:12 -- nvmf/common.sh@520 -- # local subsystem config 00:07:46.622 07:35:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:46.622 07:35:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:46.622 { 00:07:46.622 "params": { 00:07:46.622 "name": "Nvme$subsystem", 00:07:46.622 "trtype": "$TEST_TRANSPORT", 00:07:46.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.622 "adrfam": "ipv4", 00:07:46.622 "trsvcid": "$NVMF_PORT", 00:07:46.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.622 "hdgst": ${hdgst:-false}, 00:07:46.622 "ddgst": ${ddgst:-false} 00:07:46.622 }, 00:07:46.622 "method": "bdev_nvme_attach_controller" 00:07:46.622 } 00:07:46.622 EOF 00:07:46.622 )") 00:07:46.622 07:35:12 -- nvmf/common.sh@542 -- # cat 00:07:46.622 07:35:12 -- nvmf/common.sh@544 -- # jq . 00:07:46.622 07:35:12 -- nvmf/common.sh@544 -- # jq . 
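Each bdevperf instance gets its bdev configuration over an anonymous pipe rather than a file: the /dev/fd/63 path passed to --json is what a bash process substitution around gen_nvmf_target_json typically expands to, and the fragment that helper prints (shown just below) is a bdev_nvme_attach_controller call that attaches Nvme1 over TCP to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1. A sketch of the write-workload invocation built that way; the full JSON wrapper that gen_nvmf_target_json emits around the fragment is not shown in this log, so the helper is used as-is here:

    # Assumes test/nvmf/common.sh has been sourced so gen_nvmf_target_json is defined
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256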
00:07:46.622 07:35:12 -- nvmf/common.sh@545 -- # IFS=, 00:07:46.622 07:35:12 -- nvmf/common.sh@545 -- # IFS=, 00:07:46.622 07:35:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:46.622 "params": { 00:07:46.622 "name": "Nvme1", 00:07:46.622 "trtype": "tcp", 00:07:46.622 "traddr": "10.0.0.2", 00:07:46.622 "adrfam": "ipv4", 00:07:46.622 "trsvcid": "4420", 00:07:46.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.622 "hdgst": false, 00:07:46.622 "ddgst": false 00:07:46.622 }, 00:07:46.622 "method": "bdev_nvme_attach_controller" 00:07:46.622 }' 00:07:46.622 07:35:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:46.622 "params": { 00:07:46.622 "name": "Nvme1", 00:07:46.622 "trtype": "tcp", 00:07:46.622 "traddr": "10.0.0.2", 00:07:46.622 "adrfam": "ipv4", 00:07:46.622 "trsvcid": "4420", 00:07:46.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.622 "hdgst": false, 00:07:46.622 "ddgst": false 00:07:46.622 }, 00:07:46.622 "method": "bdev_nvme_attach_controller" 00:07:46.622 }' 00:07:46.622 [2024-12-02 07:35:12.185101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.622 [2024-12-02 07:35:12.185101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.622 [2024-12-02 07:35:12.185190] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-ty[2024-12-02 07:35:12.185193] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=libpe=auto ] 00:07:46.622 .cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:46.622 [2024-12-02 07:35:12.185628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.622 [2024-12-02 07:35:12.185817] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:46.622 07:35:12 -- target/bdev_io_wait.sh@37 -- # wait 61550 00:07:46.622 [2024-12-02 07:35:12.209280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.622 [2024-12-02 07:35:12.213398] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:46.896 [2024-12-02 07:35:12.367134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.896 [2024-12-02 07:35:12.408817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.896 [2024-12-02 07:35:12.420735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:07:46.896 [2024-12-02 07:35:12.447420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.896 [2024-12-02 07:35:12.462252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:07:46.896 [2024-12-02 07:35:12.491287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.896 [2024-12-02 07:35:12.500429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:07:47.210 Running I/O for 1 seconds... 
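The bdev_io_wait run drives four bdevperf jobs against the same Nvme1n1 namespace at once, one workload per instance and one core each (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80), then waits on each job in turn; the per-job latency tables follow. A simplified outline of that orchestration, with the concrete PIDs from the log (61550/61552/61555/61562) replaced by the script's variable names and paths shortened:

    # bdevperf stands for /home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    READ_PID=$!
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    # Block until every workload has finished before tearing the subsystem down
    wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"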
00:07:47.211 [2024-12-02 07:35:12.543585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.211 Running I/O for 1 seconds... 00:07:47.211 Running I/O for 1 seconds... 00:07:47.211 Running I/O for 1 seconds... 00:07:48.166 00:07:48.166 Latency(us) 00:07:48.166 [2024-12-02T07:35:13.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.166 [2024-12-02T07:35:13.790Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:48.166 Nvme1n1 : 1.02 6014.26 23.49 0.00 0.00 20950.51 9889.98 36700.16 00:07:48.166 [2024-12-02T07:35:13.790Z] =================================================================================================================== 00:07:48.166 [2024-12-02T07:35:13.790Z] Total : 6014.26 23.49 0.00 0.00 20950.51 9889.98 36700.16 00:07:48.166 00:07:48.166 Latency(us) 00:07:48.166 [2024-12-02T07:35:13.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.166 [2024-12-02T07:35:13.790Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:48.166 Nvme1n1 : 1.00 173812.91 678.96 0.00 0.00 733.73 329.54 1139.43 00:07:48.166 [2024-12-02T07:35:13.790Z] =================================================================================================================== 00:07:48.166 [2024-12-02T07:35:13.790Z] Total : 173812.91 678.96 0.00 0.00 733.73 329.54 1139.43 00:07:48.166 00:07:48.166 Latency(us) 00:07:48.166 [2024-12-02T07:35:13.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.166 [2024-12-02T07:35:13.790Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:48.166 Nvme1n1 : 1.01 9703.05 37.90 0.00 0.00 13138.18 7149.38 23831.27 00:07:48.166 [2024-12-02T07:35:13.790Z] =================================================================================================================== 00:07:48.166 [2024-12-02T07:35:13.790Z] Total : 9703.05 37.90 0.00 0.00 13138.18 7149.38 23831.27 00:07:48.166 00:07:48.166 Latency(us) 00:07:48.166 [2024-12-02T07:35:13.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.166 [2024-12-02T07:35:13.790Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:48.166 Nvme1n1 : 1.01 6076.49 23.74 0.00 0.00 20996.51 5630.14 40989.79 00:07:48.166 [2024-12-02T07:35:13.790Z] =================================================================================================================== 00:07:48.166 [2024-12-02T07:35:13.790Z] Total : 6076.49 23.74 0.00 0.00 20996.51 5630.14 40989.79 00:07:48.427 07:35:13 -- target/bdev_io_wait.sh@38 -- # wait 61552 00:07:48.427 07:35:13 -- target/bdev_io_wait.sh@39 -- # wait 61555 00:07:48.427 07:35:13 -- target/bdev_io_wait.sh@40 -- # wait 61562 00:07:48.427 07:35:13 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.427 07:35:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.427 07:35:13 -- common/autotest_common.sh@10 -- # set +x 00:07:48.427 07:35:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.427 07:35:13 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:48.427 07:35:13 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:48.427 07:35:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:48.427 07:35:13 -- nvmf/common.sh@116 -- # sync 00:07:48.427 07:35:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:48.427 07:35:13 -- nvmf/common.sh@119 -- # set +e 00:07:48.427 07:35:13 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:07:48.427 07:35:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:48.427 rmmod nvme_tcp 00:07:48.427 rmmod nvme_fabrics 00:07:48.427 rmmod nvme_keyring 00:07:48.427 07:35:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:48.427 07:35:13 -- nvmf/common.sh@123 -- # set -e 00:07:48.427 07:35:13 -- nvmf/common.sh@124 -- # return 0 00:07:48.427 07:35:13 -- nvmf/common.sh@477 -- # '[' -n 61518 ']' 00:07:48.427 07:35:13 -- nvmf/common.sh@478 -- # killprocess 61518 00:07:48.427 07:35:13 -- common/autotest_common.sh@936 -- # '[' -z 61518 ']' 00:07:48.427 07:35:13 -- common/autotest_common.sh@940 -- # kill -0 61518 00:07:48.427 07:35:13 -- common/autotest_common.sh@941 -- # uname 00:07:48.427 07:35:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.427 07:35:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61518 00:07:48.427 killing process with pid 61518 00:07:48.427 07:35:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.427 07:35:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.427 07:35:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61518' 00:07:48.427 07:35:14 -- common/autotest_common.sh@955 -- # kill 61518 00:07:48.427 07:35:14 -- common/autotest_common.sh@960 -- # wait 61518 00:07:48.686 07:35:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:48.686 07:35:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:48.686 07:35:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:48.686 07:35:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.686 07:35:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:48.686 07:35:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.686 07:35:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.686 07:35:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.686 07:35:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:48.686 00:07:48.686 real 0m3.062s 00:07:48.686 user 0m13.382s 00:07:48.686 sys 0m1.880s 00:07:48.686 07:35:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.686 07:35:14 -- common/autotest_common.sh@10 -- # set +x 00:07:48.686 ************************************ 00:07:48.686 END TEST nvmf_bdev_io_wait 00:07:48.686 ************************************ 00:07:48.687 07:35:14 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:48.687 07:35:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:48.687 07:35:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.687 07:35:14 -- common/autotest_common.sh@10 -- # set +x 00:07:48.687 ************************************ 00:07:48.687 START TEST nvmf_queue_depth 00:07:48.687 ************************************ 00:07:48.687 07:35:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:48.687 * Looking for test storage... 
00:07:48.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.687 07:35:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:48.687 07:35:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:48.687 07:35:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:48.946 07:35:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:48.946 07:35:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:48.946 07:35:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:48.946 07:35:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:48.946 07:35:14 -- scripts/common.sh@335 -- # IFS=.-: 00:07:48.946 07:35:14 -- scripts/common.sh@335 -- # read -ra ver1 00:07:48.946 07:35:14 -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.946 07:35:14 -- scripts/common.sh@336 -- # read -ra ver2 00:07:48.946 07:35:14 -- scripts/common.sh@337 -- # local 'op=<' 00:07:48.946 07:35:14 -- scripts/common.sh@339 -- # ver1_l=2 00:07:48.947 07:35:14 -- scripts/common.sh@340 -- # ver2_l=1 00:07:48.947 07:35:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:48.947 07:35:14 -- scripts/common.sh@343 -- # case "$op" in 00:07:48.947 07:35:14 -- scripts/common.sh@344 -- # : 1 00:07:48.947 07:35:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:48.947 07:35:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.947 07:35:14 -- scripts/common.sh@364 -- # decimal 1 00:07:48.947 07:35:14 -- scripts/common.sh@352 -- # local d=1 00:07:48.947 07:35:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.947 07:35:14 -- scripts/common.sh@354 -- # echo 1 00:07:48.947 07:35:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:48.947 07:35:14 -- scripts/common.sh@365 -- # decimal 2 00:07:48.947 07:35:14 -- scripts/common.sh@352 -- # local d=2 00:07:48.947 07:35:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.947 07:35:14 -- scripts/common.sh@354 -- # echo 2 00:07:48.947 07:35:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:48.947 07:35:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:48.947 07:35:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:48.947 07:35:14 -- scripts/common.sh@367 -- # return 0 00:07:48.947 07:35:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.947 07:35:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:48.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.947 --rc genhtml_branch_coverage=1 00:07:48.947 --rc genhtml_function_coverage=1 00:07:48.947 --rc genhtml_legend=1 00:07:48.947 --rc geninfo_all_blocks=1 00:07:48.947 --rc geninfo_unexecuted_blocks=1 00:07:48.947 00:07:48.947 ' 00:07:48.947 07:35:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:48.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.947 --rc genhtml_branch_coverage=1 00:07:48.947 --rc genhtml_function_coverage=1 00:07:48.947 --rc genhtml_legend=1 00:07:48.947 --rc geninfo_all_blocks=1 00:07:48.947 --rc geninfo_unexecuted_blocks=1 00:07:48.947 00:07:48.947 ' 00:07:48.947 07:35:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:48.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.947 --rc genhtml_branch_coverage=1 00:07:48.947 --rc genhtml_function_coverage=1 00:07:48.947 --rc genhtml_legend=1 00:07:48.947 --rc geninfo_all_blocks=1 00:07:48.947 --rc geninfo_unexecuted_blocks=1 00:07:48.947 00:07:48.947 ' 00:07:48.947 
07:35:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:48.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.947 --rc genhtml_branch_coverage=1 00:07:48.947 --rc genhtml_function_coverage=1 00:07:48.947 --rc genhtml_legend=1 00:07:48.947 --rc geninfo_all_blocks=1 00:07:48.947 --rc geninfo_unexecuted_blocks=1 00:07:48.947 00:07:48.947 ' 00:07:48.947 07:35:14 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.947 07:35:14 -- nvmf/common.sh@7 -- # uname -s 00:07:48.947 07:35:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.947 07:35:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.947 07:35:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.947 07:35:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.947 07:35:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.947 07:35:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.947 07:35:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.947 07:35:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.947 07:35:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.947 07:35:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.947 07:35:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:07:48.947 07:35:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:07:48.947 07:35:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.947 07:35:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.947 07:35:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.947 07:35:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.947 07:35:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.947 07:35:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.947 07:35:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.947 07:35:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.947 07:35:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.947 07:35:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.947 07:35:14 -- paths/export.sh@5 -- # export PATH 00:07:48.947 07:35:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.947 07:35:14 -- nvmf/common.sh@46 -- # : 0 00:07:48.947 07:35:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.947 07:35:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.947 07:35:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.947 07:35:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.947 07:35:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.947 07:35:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.947 07:35:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.947 07:35:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.947 07:35:14 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:48.947 07:35:14 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:48.947 07:35:14 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:48.947 07:35:14 -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:48.947 07:35:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:48.947 07:35:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.947 07:35:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:48.947 07:35:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:48.947 07:35:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:48.947 07:35:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.947 07:35:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.947 07:35:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.947 07:35:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:48.947 07:35:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:48.947 07:35:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:48.947 07:35:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:48.947 07:35:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:48.948 07:35:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:48.948 07:35:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.948 07:35:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.948 07:35:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:48.948 07:35:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:48.948 07:35:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.948 07:35:14 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.948 07:35:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.948 07:35:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.948 07:35:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.948 07:35:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.948 07:35:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:48.948 07:35:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.948 07:35:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:48.948 07:35:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:48.948 Cannot find device "nvmf_tgt_br" 00:07:48.948 07:35:14 -- nvmf/common.sh@154 -- # true 00:07:48.948 07:35:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.948 Cannot find device "nvmf_tgt_br2" 00:07:48.948 07:35:14 -- nvmf/common.sh@155 -- # true 00:07:48.948 07:35:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:48.948 07:35:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:48.948 Cannot find device "nvmf_tgt_br" 00:07:48.948 07:35:14 -- nvmf/common.sh@157 -- # true 00:07:48.948 07:35:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:48.948 Cannot find device "nvmf_tgt_br2" 00:07:48.948 07:35:14 -- nvmf/common.sh@158 -- # true 00:07:48.948 07:35:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:48.948 07:35:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:48.948 07:35:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.948 07:35:14 -- nvmf/common.sh@161 -- # true 00:07:48.948 07:35:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.948 07:35:14 -- nvmf/common.sh@162 -- # true 00:07:48.948 07:35:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.948 07:35:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.948 07:35:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.948 07:35:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:48.948 07:35:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:49.208 07:35:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:49.208 07:35:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:49.208 07:35:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:49.208 07:35:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:49.208 07:35:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:49.208 07:35:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:49.208 07:35:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:49.208 07:35:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:49.208 07:35:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:49.208 07:35:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:07:49.208 07:35:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:49.208 07:35:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:49.208 07:35:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:49.208 07:35:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:49.208 07:35:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:49.208 07:35:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:49.208 07:35:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:49.208 07:35:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:49.208 07:35:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:49.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:07:49.208 00:07:49.208 --- 10.0.0.2 ping statistics --- 00:07:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.208 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:49.208 07:35:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:49.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:49.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:07:49.208 00:07:49.208 --- 10.0.0.3 ping statistics --- 00:07:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.208 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:49.208 07:35:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:49.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:49.208 00:07:49.208 --- 10.0.0.1 ping statistics --- 00:07:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.208 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:49.208 07:35:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.208 07:35:14 -- nvmf/common.sh@421 -- # return 0 00:07:49.208 07:35:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:49.208 07:35:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.208 07:35:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:49.208 07:35:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:49.208 07:35:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.208 07:35:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:49.208 07:35:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:49.208 07:35:14 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:49.208 07:35:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:49.208 07:35:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.208 07:35:14 -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 07:35:14 -- nvmf/common.sh@469 -- # nvmfpid=61764 00:07:49.208 07:35:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:49.208 07:35:14 -- nvmf/common.sh@470 -- # waitforlisten 61764 00:07:49.208 07:35:14 -- common/autotest_common.sh@829 -- # '[' -z 61764 ']' 00:07:49.208 07:35:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.208 07:35:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.208 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:49.208 07:35:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.208 07:35:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.208 07:35:14 -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 [2024-12-02 07:35:14.784690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.208 [2024-12-02 07:35:14.784786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.471 [2024-12-02 07:35:14.915001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.471 [2024-12-02 07:35:14.964025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.471 [2024-12-02 07:35:14.964142] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.471 [2024-12-02 07:35:14.964154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.471 [2024-12-02 07:35:14.964162] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.471 [2024-12-02 07:35:14.964193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.409 07:35:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.409 07:35:15 -- common/autotest_common.sh@862 -- # return 0 00:07:50.409 07:35:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:50.409 07:35:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 07:35:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.409 07:35:15 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.409 07:35:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 [2024-12-02 07:35:15.762264] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.409 07:35:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.409 07:35:15 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.409 07:35:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 Malloc0 00:07:50.409 07:35:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.409 07:35:15 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.409 07:35:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 07:35:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.409 07:35:15 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.409 07:35:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 07:35:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.409 07:35:15 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
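(Editor's note) The rpc_cmd calls traced above assemble the queue-depth target: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem with that bdev as its namespace, and a listener on 10.0.0.2:4420. A minimal stand-alone sketch of the same sequence, assuming the nvmf_tgt started earlier in this log is still serving RPCs on the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512-byte block size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420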
00:07:50.409 07:35:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 [2024-12-02 07:35:15.809658] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.409 07:35:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.409 07:35:15 -- target/queue_depth.sh@30 -- # bdevperf_pid=61796 00:07:50.409 07:35:15 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.409 07:35:15 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:50.409 07:35:15 -- target/queue_depth.sh@33 -- # waitforlisten 61796 /var/tmp/bdevperf.sock 00:07:50.409 07:35:15 -- common/autotest_common.sh@829 -- # '[' -z 61796 ']' 00:07:50.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.409 07:35:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.409 07:35:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.409 07:35:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.409 07:35:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.409 07:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:50.409 [2024-12-02 07:35:15.856527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.409 [2024-12-02 07:35:15.856623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61796 ] 00:07:50.409 [2024-12-02 07:35:15.991428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.669 [2024-12-02 07:35:16.045560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.669 07:35:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.669 07:35:16 -- common/autotest_common.sh@862 -- # return 0 00:07:50.669 07:35:16 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:50.669 07:35:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.669 07:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:50.669 NVMe0n1 00:07:50.669 07:35:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.669 07:35:16 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.669 Running I/O for 10 seconds... 
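(Editor's note) On the initiator side, the trace above starts bdevperf in RPC-wait mode, attaches the exported namespace over TCP, and then triggers the timed run whose results follow. A condensed sketch of those three steps, using the same binaries and flags as traced; the backgrounding with & is assumed here so the RPC calls can follow in one shell:

    spdk=/home/vagrant/spdk_repo/spdk
    # -z waits for RPC configuration; queue depth 1024, 4 KiB I/Os, verify workload, 10 s run
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # attach the remote namespace as bdev NVMe0n1 through the bdevperf RPC socket
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the timed run; the Latency(us) summary below is its output
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests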
00:08:02.883 00:08:02.883 Latency(us) 00:08:02.883 [2024-12-02T07:35:28.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.883 [2024-12-02T07:35:28.507Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:02.883 Verification LBA range: start 0x0 length 0x4000 00:08:02.883 NVMe0n1 : 10.05 17090.66 66.76 0.00 0.00 59713.78 11856.06 52667.11 00:08:02.883 [2024-12-02T07:35:28.507Z] =================================================================================================================== 00:08:02.883 [2024-12-02T07:35:28.507Z] Total : 17090.66 66.76 0.00 0.00 59713.78 11856.06 52667.11 00:08:02.883 0 00:08:02.883 07:35:26 -- target/queue_depth.sh@39 -- # killprocess 61796 00:08:02.883 07:35:26 -- common/autotest_common.sh@936 -- # '[' -z 61796 ']' 00:08:02.883 07:35:26 -- common/autotest_common.sh@940 -- # kill -0 61796 00:08:02.883 07:35:26 -- common/autotest_common.sh@941 -- # uname 00:08:02.883 07:35:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.883 07:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61796 00:08:02.883 killing process with pid 61796 00:08:02.883 Received shutdown signal, test time was about 10.000000 seconds 00:08:02.883 00:08:02.883 Latency(us) 00:08:02.883 [2024-12-02T07:35:28.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.883 [2024-12-02T07:35:28.507Z] =================================================================================================================== 00:08:02.883 [2024-12-02T07:35:28.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:02.883 07:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:02.883 07:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:02.883 07:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61796' 00:08:02.883 07:35:26 -- common/autotest_common.sh@955 -- # kill 61796 00:08:02.883 07:35:26 -- common/autotest_common.sh@960 -- # wait 61796 00:08:02.883 07:35:26 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:02.883 07:35:26 -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:02.883 07:35:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:02.884 07:35:26 -- nvmf/common.sh@116 -- # sync 00:08:02.884 07:35:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:02.884 07:35:26 -- nvmf/common.sh@119 -- # set +e 00:08:02.884 07:35:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:02.884 07:35:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:02.884 rmmod nvme_tcp 00:08:02.884 rmmod nvme_fabrics 00:08:02.884 rmmod nvme_keyring 00:08:02.884 07:35:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:02.884 07:35:26 -- nvmf/common.sh@123 -- # set -e 00:08:02.884 07:35:26 -- nvmf/common.sh@124 -- # return 0 00:08:02.884 07:35:26 -- nvmf/common.sh@477 -- # '[' -n 61764 ']' 00:08:02.884 07:35:26 -- nvmf/common.sh@478 -- # killprocess 61764 00:08:02.884 07:35:26 -- common/autotest_common.sh@936 -- # '[' -z 61764 ']' 00:08:02.884 07:35:26 -- common/autotest_common.sh@940 -- # kill -0 61764 00:08:02.884 07:35:26 -- common/autotest_common.sh@941 -- # uname 00:08:02.884 07:35:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:02.884 07:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61764 00:08:02.884 07:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:02.884 07:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:08:02.884 killing process with pid 61764 00:08:02.884 07:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61764' 00:08:02.884 07:35:26 -- common/autotest_common.sh@955 -- # kill 61764 00:08:02.884 07:35:26 -- common/autotest_common.sh@960 -- # wait 61764 00:08:02.884 07:35:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:02.884 07:35:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:02.884 07:35:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:02.884 07:35:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.884 07:35:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:02.884 07:35:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.884 07:35:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.884 07:35:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.884 07:35:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:02.884 00:08:02.884 real 0m12.632s 00:08:02.884 user 0m21.820s 00:08:02.884 sys 0m1.793s 00:08:02.884 07:35:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.884 07:35:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.884 ************************************ 00:08:02.884 END TEST nvmf_queue_depth 00:08:02.884 ************************************ 00:08:02.884 07:35:26 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:02.884 07:35:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.884 07:35:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.884 07:35:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.884 ************************************ 00:08:02.884 START TEST nvmf_multipath 00:08:02.884 ************************************ 00:08:02.884 07:35:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:02.884 * Looking for test storage... 00:08:02.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.884 07:35:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:02.884 07:35:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:02.884 07:35:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:02.884 07:35:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:02.884 07:35:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:02.884 07:35:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:02.884 07:35:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:02.884 07:35:27 -- scripts/common.sh@335 -- # IFS=.-: 00:08:02.884 07:35:27 -- scripts/common.sh@335 -- # read -ra ver1 00:08:02.884 07:35:27 -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.884 07:35:27 -- scripts/common.sh@336 -- # read -ra ver2 00:08:02.884 07:35:27 -- scripts/common.sh@337 -- # local 'op=<' 00:08:02.884 07:35:27 -- scripts/common.sh@339 -- # ver1_l=2 00:08:02.884 07:35:27 -- scripts/common.sh@340 -- # ver2_l=1 00:08:02.884 07:35:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:02.884 07:35:27 -- scripts/common.sh@343 -- # case "$op" in 00:08:02.884 07:35:27 -- scripts/common.sh@344 -- # : 1 00:08:02.884 07:35:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:02.884 07:35:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.884 07:35:27 -- scripts/common.sh@364 -- # decimal 1 00:08:02.884 07:35:27 -- scripts/common.sh@352 -- # local d=1 00:08:02.884 07:35:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.884 07:35:27 -- scripts/common.sh@354 -- # echo 1 00:08:02.884 07:35:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:02.884 07:35:27 -- scripts/common.sh@365 -- # decimal 2 00:08:02.884 07:35:27 -- scripts/common.sh@352 -- # local d=2 00:08:02.884 07:35:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.884 07:35:27 -- scripts/common.sh@354 -- # echo 2 00:08:02.884 07:35:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:02.884 07:35:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:02.884 07:35:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:02.884 07:35:27 -- scripts/common.sh@367 -- # return 0 00:08:02.884 07:35:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.884 07:35:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:02.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.884 --rc genhtml_branch_coverage=1 00:08:02.884 --rc genhtml_function_coverage=1 00:08:02.884 --rc genhtml_legend=1 00:08:02.884 --rc geninfo_all_blocks=1 00:08:02.884 --rc geninfo_unexecuted_blocks=1 00:08:02.884 00:08:02.884 ' 00:08:02.884 07:35:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:02.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.884 --rc genhtml_branch_coverage=1 00:08:02.884 --rc genhtml_function_coverage=1 00:08:02.884 --rc genhtml_legend=1 00:08:02.884 --rc geninfo_all_blocks=1 00:08:02.884 --rc geninfo_unexecuted_blocks=1 00:08:02.884 00:08:02.884 ' 00:08:02.884 07:35:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:02.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.884 --rc genhtml_branch_coverage=1 00:08:02.884 --rc genhtml_function_coverage=1 00:08:02.884 --rc genhtml_legend=1 00:08:02.884 --rc geninfo_all_blocks=1 00:08:02.884 --rc geninfo_unexecuted_blocks=1 00:08:02.884 00:08:02.884 ' 00:08:02.884 07:35:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:02.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.884 --rc genhtml_branch_coverage=1 00:08:02.884 --rc genhtml_function_coverage=1 00:08:02.884 --rc genhtml_legend=1 00:08:02.884 --rc geninfo_all_blocks=1 00:08:02.884 --rc geninfo_unexecuted_blocks=1 00:08:02.884 00:08:02.884 ' 00:08:02.884 07:35:27 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.884 07:35:27 -- nvmf/common.sh@7 -- # uname -s 00:08:02.884 07:35:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.884 07:35:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.884 07:35:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.884 07:35:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.884 07:35:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.884 07:35:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.884 07:35:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.884 07:35:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.884 07:35:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.884 07:35:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.884 07:35:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:02.884 
07:35:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:02.884 07:35:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.884 07:35:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.884 07:35:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.884 07:35:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.884 07:35:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.884 07:35:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.884 07:35:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.884 07:35:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.884 07:35:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.884 07:35:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.884 07:35:27 -- paths/export.sh@5 -- # export PATH 00:08:02.884 07:35:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.884 07:35:27 -- nvmf/common.sh@46 -- # : 0 00:08:02.884 07:35:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:02.885 07:35:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:02.885 07:35:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:02.885 07:35:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.885 07:35:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.885 07:35:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:02.885 07:35:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:02.885 07:35:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:02.885 07:35:27 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.885 07:35:27 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.885 07:35:27 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:02.885 07:35:27 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.885 07:35:27 -- target/multipath.sh@43 -- # nvmftestinit 00:08:02.885 07:35:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:02.885 07:35:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.885 07:35:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:02.885 07:35:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:02.885 07:35:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:02.885 07:35:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.885 07:35:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.885 07:35:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.885 07:35:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:02.885 07:35:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.885 07:35:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.885 07:35:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:02.885 07:35:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:02.885 07:35:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.885 07:35:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.885 07:35:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.885 07:35:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.885 07:35:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.885 07:35:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.885 07:35:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:02.885 07:35:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.885 07:35:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:02.885 07:35:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:02.885 Cannot find device "nvmf_tgt_br" 00:08:02.885 07:35:27 -- nvmf/common.sh@154 -- # true 00:08:02.885 07:35:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.885 Cannot find device "nvmf_tgt_br2" 00:08:02.885 07:35:27 -- nvmf/common.sh@155 -- # true 00:08:02.885 07:35:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:02.885 07:35:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:02.885 Cannot find device "nvmf_tgt_br" 00:08:02.885 07:35:27 -- nvmf/common.sh@157 -- # true 00:08:02.885 07:35:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:02.885 Cannot find device "nvmf_tgt_br2" 00:08:02.885 07:35:27 -- nvmf/common.sh@158 -- # true 00:08:02.885 07:35:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:02.885 07:35:27 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:02.885 07:35:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.885 07:35:27 -- nvmf/common.sh@161 -- # true 00:08:02.885 07:35:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.885 07:35:27 -- nvmf/common.sh@162 -- # true 00:08:02.885 07:35:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:02.885 07:35:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:02.885 07:35:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:02.885 07:35:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:02.885 07:35:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:02.885 07:35:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:02.885 07:35:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:02.885 07:35:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:02.885 07:35:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:02.885 07:35:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:02.885 07:35:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:02.885 07:35:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:02.885 07:35:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:02.885 07:35:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:02.885 07:35:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:02.885 07:35:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:02.885 07:35:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:02.885 07:35:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:02.885 07:35:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:02.885 07:35:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:02.885 07:35:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:02.885 07:35:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:02.885 07:35:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:02.885 07:35:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:02.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:02.885 00:08:02.885 --- 10.0.0.2 ping statistics --- 00:08:02.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.885 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:02.885 07:35:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:02.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:02.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:02.885 00:08:02.885 --- 10.0.0.3 ping statistics --- 00:08:02.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.885 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:02.885 07:35:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:02.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:02.885 00:08:02.885 --- 10.0.0.1 ping statistics --- 00:08:02.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.885 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:02.885 07:35:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.885 07:35:27 -- nvmf/common.sh@421 -- # return 0 00:08:02.885 07:35:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:02.885 07:35:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.885 07:35:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:02.885 07:35:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.885 07:35:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:02.885 07:35:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:02.885 07:35:27 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:02.885 07:35:27 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:02.885 07:35:27 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:02.885 07:35:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:02.885 07:35:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.885 07:35:27 -- common/autotest_common.sh@10 -- # set +x 00:08:02.885 07:35:27 -- nvmf/common.sh@469 -- # nvmfpid=62118 00:08:02.885 07:35:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.885 07:35:27 -- nvmf/common.sh@470 -- # waitforlisten 62118 00:08:02.885 07:35:27 -- common/autotest_common.sh@829 -- # '[' -z 62118 ']' 00:08:02.885 07:35:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.885 07:35:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.885 07:35:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.885 07:35:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.885 07:35:27 -- common/autotest_common.sh@10 -- # set +x 00:08:02.885 [2024-12-02 07:35:27.500543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:02.885 [2024-12-02 07:35:27.501108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.885 [2024-12-02 07:35:27.642633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.885 [2024-12-02 07:35:27.713992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:02.885 [2024-12-02 07:35:27.714165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:02.885 [2024-12-02 07:35:27.714194] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.885 [2024-12-02 07:35:27.714205] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.885 [2024-12-02 07:35:27.714327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.885 [2024-12-02 07:35:27.714906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.885 [2024-12-02 07:35:27.715102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.885 [2024-12-02 07:35:27.715117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.885 07:35:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.885 07:35:28 -- common/autotest_common.sh@862 -- # return 0 00:08:02.885 07:35:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:02.885 07:35:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.885 07:35:28 -- common/autotest_common.sh@10 -- # set +x 00:08:03.143 07:35:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.143 07:35:28 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.143 [2024-12-02 07:35:28.716459] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.143 07:35:28 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:03.401 Malloc0 00:08:03.660 07:35:29 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:03.918 07:35:29 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.918 07:35:29 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.177 [2024-12-02 07:35:29.717790] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.177 07:35:29 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:04.436 [2024-12-02 07:35:29.929968] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:04.436 07:35:29 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:04.694 07:35:30 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:04.694 07:35:30 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.694 07:35:30 -- common/autotest_common.sh@1187 -- # local i=0 00:08:04.694 07:35:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.694 07:35:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:04.694 07:35:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:07.228 07:35:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
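(Editor's note) For the multipath test, the trace above creates one subsystem with listeners on both 10.0.0.2 and 10.0.0.3 and connects to each of them, so the host sees a single namespace reachable over two paths (nvme0c0n1 and nvme0c1n1). The steps that follow flip the ANA state of each listener from the target side and poll the per-path state on the host. A minimal sketch of that round trip, using the addresses, NQNs, and flags from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a
    # connect through both listeners (flags as traced above)
    nvme connect --hostnqn=$hostnqn --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n $nqn -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$hostnqn --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n $nqn -a 10.0.0.3 -s 4420 -g -G
    # target side: make the 10.0.0.2 path inaccessible, leave 10.0.0.3 non-optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    # host side: per-path ANA state is readable from sysfs, which is what check_ana_state polls
    cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state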
00:08:07.228 07:35:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:07.228 07:35:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.228 07:35:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:07.228 07:35:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.228 07:35:32 -- common/autotest_common.sh@1197 -- # return 0 00:08:07.228 07:35:32 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:07.228 07:35:32 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:07.228 07:35:32 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:07.228 07:35:32 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:07.228 07:35:32 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:07.228 07:35:32 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:07.228 07:35:32 -- target/multipath.sh@38 -- # return 0 00:08:07.228 07:35:32 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:07.228 07:35:32 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:07.228 07:35:32 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:07.228 07:35:32 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:07.228 07:35:32 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:07.228 07:35:32 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:07.228 07:35:32 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:07.228 07:35:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:07.228 07:35:32 -- target/multipath.sh@22 -- # local timeout=20 00:08:07.228 07:35:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:07.228 07:35:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:07.228 07:35:32 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:07.228 07:35:32 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:07.228 07:35:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:07.228 07:35:32 -- target/multipath.sh@22 -- # local timeout=20 00:08:07.228 07:35:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:07.228 07:35:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:07.228 07:35:32 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:07.228 07:35:32 -- target/multipath.sh@85 -- # echo numa 00:08:07.228 07:35:32 -- target/multipath.sh@88 -- # fio_pid=62208 00:08:07.228 07:35:32 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:07.228 07:35:32 -- target/multipath.sh@90 -- # sleep 1 00:08:07.228 [global] 00:08:07.228 thread=1 00:08:07.228 invalidate=1 00:08:07.228 rw=randrw 00:08:07.228 time_based=1 00:08:07.228 runtime=6 00:08:07.228 ioengine=libaio 00:08:07.228 direct=1 00:08:07.228 bs=4096 00:08:07.228 iodepth=128 00:08:07.228 norandommap=0 00:08:07.228 numjobs=1 00:08:07.228 00:08:07.228 verify_dump=1 00:08:07.228 verify_backlog=512 00:08:07.228 verify_state_save=0 00:08:07.228 do_verify=1 00:08:07.228 verify=crc32c-intel 00:08:07.228 [job0] 00:08:07.228 filename=/dev/nvme0n1 00:08:07.228 Could not set queue depth (nvme0n1) 00:08:07.228 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:07.228 fio-3.35 00:08:07.228 Starting 1 thread 00:08:07.796 07:35:33 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:08.055 07:35:33 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:08.314 07:35:33 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:08.314 07:35:33 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:08.314 07:35:33 -- target/multipath.sh@22 -- # local timeout=20 00:08:08.314 07:35:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:08.314 07:35:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:08.314 07:35:33 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:08.314 07:35:33 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:08.314 07:35:33 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:08.314 07:35:33 -- target/multipath.sh@22 -- # local timeout=20 00:08:08.314 07:35:33 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:08.314 07:35:33 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:08.314 07:35:33 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:08.314 07:35:33 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:08.573 07:35:34 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:08.831 07:35:34 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:08.831 07:35:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:08.831 07:35:34 -- target/multipath.sh@22 -- # local timeout=20 00:08:08.831 07:35:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:08.831 07:35:34 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:08.831 07:35:34 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:08.831 07:35:34 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:08.831 07:35:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:08.831 07:35:34 -- target/multipath.sh@22 -- # local timeout=20 00:08:08.831 07:35:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:08.831 07:35:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:08.832 07:35:34 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:08.832 07:35:34 -- target/multipath.sh@104 -- # wait 62208 00:08:13.020 00:08:13.020 job0: (groupid=0, jobs=1): err= 0: pid=62229: Mon Dec 2 07:35:38 2024 00:08:13.020 read: IOPS=12.2k, BW=47.7MiB/s (50.1MB/s)(287MiB/6002msec) 00:08:13.020 slat (usec): min=7, max=9231, avg=47.44, stdev=201.25 00:08:13.020 clat (usec): min=1267, max=16645, avg=7109.93, stdev=1272.05 00:08:13.020 lat (usec): min=1276, max=17085, avg=7157.37, stdev=1275.99 00:08:13.020 clat percentiles (usec): 00:08:13.020 | 1.00th=[ 3720], 5.00th=[ 5342], 10.00th=[ 5997], 20.00th=[ 6390], 00:08:13.020 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7242], 00:08:13.020 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8225], 95.00th=[10028], 00:08:13.020 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12125], 99.95th=[12256], 00:08:13.020 | 99.99th=[12649] 00:08:13.020 bw ( KiB/s): min=13832, max=33512, per=52.76%, avg=25794.18, stdev=6245.01, samples=11 00:08:13.020 iops : min= 3458, max= 8378, avg=6448.55, stdev=1561.25, samples=11 00:08:13.020 write: IOPS=7204, BW=28.1MiB/s (29.5MB/s)(149MiB/5293msec); 0 zone resets 00:08:13.020 slat (usec): min=14, max=3829, avg=57.16, stdev=133.01 00:08:13.020 clat (usec): min=1483, max=12343, avg=6286.55, stdev=1096.25 00:08:13.020 lat (usec): min=2144, max=12384, avg=6343.70, stdev=1100.79 00:08:13.020 clat percentiles (usec): 00:08:13.020 | 1.00th=[ 2966], 5.00th=[ 3818], 10.00th=[ 5014], 20.00th=[ 5800], 00:08:13.020 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6587], 00:08:13.020 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7504], 00:08:13.020 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[10814], 99.95th=[11207], 00:08:13.020 | 99.99th=[11731] 00:08:13.020 bw ( KiB/s): min=14360, max=32928, per=89.48%, avg=25786.91, stdev=5814.63, samples=11 00:08:13.020 iops : min= 3590, max= 8232, avg=6446.73, stdev=1453.66, samples=11 00:08:13.020 lat (msec) : 2=0.04%, 4=3.09%, 10=93.37%, 20=3.50% 00:08:13.020 cpu : usr=6.03%, sys=23.46%, ctx=6437, majf=0, minf=102 00:08:13.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:13.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:13.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:13.020 issued rwts: total=73353,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:13.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:13.020 00:08:13.020 Run status group 0 (all jobs): 00:08:13.020 READ: bw=47.7MiB/s (50.1MB/s), 47.7MiB/s-47.7MiB/s (50.1MB/s-50.1MB/s), io=287MiB (300MB), run=6002-6002msec 00:08:13.020 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=149MiB (156MB), run=5293-5293msec 00:08:13.020 00:08:13.020 Disk stats (read/write): 00:08:13.020 nvme0n1: ios=71750/38135, merge=0/0, 
ticks=485241/222955, in_queue=708196, util=98.55% 00:08:13.020 07:35:38 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:13.279 07:35:38 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:13.538 07:35:39 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:13.538 07:35:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:13.538 07:35:39 -- target/multipath.sh@22 -- # local timeout=20 00:08:13.538 07:35:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:13.538 07:35:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:13.538 07:35:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:13.538 07:35:39 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:13.538 07:35:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:13.538 07:35:39 -- target/multipath.sh@22 -- # local timeout=20 00:08:13.538 07:35:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:13.538 07:35:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:13.538 07:35:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:13.538 07:35:39 -- target/multipath.sh@113 -- # echo round-robin 00:08:13.538 07:35:39 -- target/multipath.sh@116 -- # fio_pid=62312 00:08:13.538 07:35:39 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:13.538 07:35:39 -- target/multipath.sh@118 -- # sleep 1 00:08:13.797 [global] 00:08:13.797 thread=1 00:08:13.797 invalidate=1 00:08:13.797 rw=randrw 00:08:13.797 time_based=1 00:08:13.797 runtime=6 00:08:13.797 ioengine=libaio 00:08:13.797 direct=1 00:08:13.797 bs=4096 00:08:13.797 iodepth=128 00:08:13.797 norandommap=0 00:08:13.797 numjobs=1 00:08:13.797 00:08:13.797 verify_dump=1 00:08:13.797 verify_backlog=512 00:08:13.797 verify_state_save=0 00:08:13.797 do_verify=1 00:08:13.797 verify=crc32c-intel 00:08:13.797 [job0] 00:08:13.797 filename=/dev/nvme0n1 00:08:13.797 Could not set queue depth (nvme0n1) 00:08:13.797 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:13.797 fio-3.35 00:08:13.797 Starting 1 thread 00:08:14.734 07:35:40 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:14.993 07:35:40 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:15.252 07:35:40 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:15.252 07:35:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:15.252 07:35:40 -- target/multipath.sh@22 -- # local timeout=20 00:08:15.252 07:35:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:15.252 07:35:40 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:15.252 07:35:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:15.252 07:35:40 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:15.252 07:35:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:15.252 07:35:40 -- target/multipath.sh@22 -- # local timeout=20 00:08:15.252 07:35:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:15.252 07:35:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:15.252 07:35:40 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:15.252 07:35:40 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:15.512 07:35:40 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:15.771 07:35:41 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:15.771 07:35:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:15.771 07:35:41 -- target/multipath.sh@22 -- # local timeout=20 00:08:15.771 07:35:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:15.771 07:35:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:15.771 07:35:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:15.771 07:35:41 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:15.771 07:35:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:15.771 07:35:41 -- target/multipath.sh@22 -- # local timeout=20 00:08:15.771 07:35:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:15.771 07:35:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:15.771 07:35:41 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:15.771 07:35:41 -- target/multipath.sh@132 -- # wait 62312 00:08:19.964 00:08:19.964 job0: (groupid=0, jobs=1): err= 0: pid=62333: Mon Dec 2 07:35:45 2024 00:08:19.964 read: IOPS=13.3k, BW=51.9MiB/s (54.4MB/s)(311MiB/6002msec) 00:08:19.964 slat (usec): min=5, max=5604, avg=38.83, stdev=172.16 00:08:19.964 clat (usec): min=236, max=13439, avg=6709.99, stdev=1533.94 00:08:19.964 lat (usec): min=265, max=13450, avg=6748.83, stdev=1546.74 00:08:19.964 clat percentiles (usec): 00:08:19.964 | 1.00th=[ 2671], 5.00th=[ 3949], 10.00th=[ 4621], 20.00th=[ 5538], 00:08:19.964 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7111], 00:08:19.964 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8160], 95.00th=[ 8848], 00:08:19.964 | 99.00th=[11076], 99.50th=[11469], 99.90th=[12256], 99.95th=[12518], 00:08:19.964 | 99.99th=[13173] 00:08:19.964 bw ( KiB/s): min=17280, max=41512, per=53.97%, avg=28668.18, stdev=8195.35, samples=11 00:08:19.964 iops : min= 4320, max=10378, avg=7167.00, stdev=2048.77, samples=11 00:08:19.964 write: IOPS=8145, BW=31.8MiB/s (33.4MB/s)(156MiB/4902msec); 0 zone resets 00:08:19.964 slat (usec): min=11, max=1349, avg=48.17, stdev=117.17 00:08:19.964 clat (usec): min=484, max=12253, avg=5684.43, stdev=1585.48 00:08:19.964 lat (usec): min=517, max=12270, avg=5732.60, stdev=1599.21 00:08:19.964 clat percentiles (usec): 00:08:19.964 | 1.00th=[ 2180], 5.00th=[ 2966], 10.00th=[ 3392], 20.00th=[ 3982], 00:08:19.964 | 30.00th=[ 4621], 40.00th=[ 5735], 50.00th=[ 6194], 60.00th=[ 6521], 00:08:19.964 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7570], 00:08:19.964 | 99.00th=[ 8979], 99.50th=[10028], 99.90th=[11207], 99.95th=[11600], 00:08:19.964 | 99.99th=[12125] 00:08:19.964 bw ( KiB/s): min=17784, max=42016, per=87.96%, avg=28661.09, stdev=7994.84, samples=11 00:08:19.964 iops : min= 4446, max=10504, avg=7165.27, stdev=1998.71, samples=11 00:08:19.964 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.04% 00:08:19.964 lat (msec) : 2=0.53%, 4=9.66%, 10=87.54%, 20=2.20% 00:08:19.964 cpu : usr=5.90%, sys=24.00%, ctx=6723, majf=0, minf=151 00:08:19.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:19.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:19.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:19.964 issued rwts: total=79710,39931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:19.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:19.964 00:08:19.964 Run status group 0 (all jobs): 00:08:19.964 READ: bw=51.9MiB/s (54.4MB/s), 51.9MiB/s-51.9MiB/s (54.4MB/s-54.4MB/s), io=311MiB (326MB), run=6002-6002msec 00:08:19.964 WRITE: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=156MiB (164MB), run=4902-4902msec 00:08:19.964 00:08:19.964 Disk stats (read/write): 00:08:19.964 nvme0n1: ios=78619/39419, merge=0/0, ticks=500878/207959, in_queue=708837, util=98.65% 00:08:19.964 07:35:45 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:19.964 07:35:45 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.964 07:35:45 -- common/autotest_common.sh@1208 -- # local i=0 00:08:19.964 07:35:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:19.964 07:35:45 
-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.964 07:35:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:19.964 07:35:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.964 07:35:45 -- common/autotest_common.sh@1220 -- # return 0 00:08:19.964 07:35:45 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.222 07:35:45 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:20.222 07:35:45 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:20.222 07:35:45 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:20.222 07:35:45 -- target/multipath.sh@144 -- # nvmftestfini 00:08:20.222 07:35:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:20.222 07:35:45 -- nvmf/common.sh@116 -- # sync 00:08:20.480 07:35:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:20.480 07:35:45 -- nvmf/common.sh@119 -- # set +e 00:08:20.480 07:35:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:20.480 07:35:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:20.480 rmmod nvme_tcp 00:08:20.480 rmmod nvme_fabrics 00:08:20.480 rmmod nvme_keyring 00:08:20.480 07:35:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:20.480 07:35:45 -- nvmf/common.sh@123 -- # set -e 00:08:20.480 07:35:45 -- nvmf/common.sh@124 -- # return 0 00:08:20.480 07:35:45 -- nvmf/common.sh@477 -- # '[' -n 62118 ']' 00:08:20.480 07:35:45 -- nvmf/common.sh@478 -- # killprocess 62118 00:08:20.480 07:35:45 -- common/autotest_common.sh@936 -- # '[' -z 62118 ']' 00:08:20.480 07:35:45 -- common/autotest_common.sh@940 -- # kill -0 62118 00:08:20.480 07:35:45 -- common/autotest_common.sh@941 -- # uname 00:08:20.480 07:35:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:20.480 07:35:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62118 00:08:20.480 killing process with pid 62118 00:08:20.480 07:35:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:20.480 07:35:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:20.480 07:35:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62118' 00:08:20.480 07:35:45 -- common/autotest_common.sh@955 -- # kill 62118 00:08:20.480 07:35:45 -- common/autotest_common.sh@960 -- # wait 62118 00:08:20.740 07:35:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:20.740 07:35:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:20.740 07:35:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:20.740 07:35:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.740 07:35:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:20.740 07:35:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.740 07:35:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.740 07:35:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.740 07:35:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:20.740 00:08:20.740 real 0m19.222s 00:08:20.740 user 1m11.925s 00:08:20.740 sys 0m9.916s 00:08:20.740 07:35:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.740 07:35:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.740 ************************************ 00:08:20.740 END TEST nvmf_multipath 00:08:20.740 ************************************ 00:08:20.740 07:35:46 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:20.740 07:35:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.740 07:35:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.740 07:35:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.740 ************************************ 00:08:20.740 START TEST nvmf_zcopy 00:08:20.740 ************************************ 00:08:20.740 07:35:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:20.740 * Looking for test storage... 00:08:20.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:20.741 07:35:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:20.741 07:35:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:20.741 07:35:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.741 07:35:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.741 07:35:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.741 07:35:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.741 07:35:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.741 07:35:46 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.741 07:35:46 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.741 07:35:46 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.741 07:35:46 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.741 07:35:46 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.741 07:35:46 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.741 07:35:46 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.741 07:35:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.741 07:35:46 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.741 07:35:46 -- scripts/common.sh@344 -- # : 1 00:08:20.741 07:35:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.741 07:35:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.741 07:35:46 -- scripts/common.sh@364 -- # decimal 1 00:08:20.741 07:35:46 -- scripts/common.sh@352 -- # local d=1 00:08:20.741 07:35:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.741 07:35:46 -- scripts/common.sh@354 -- # echo 1 00:08:20.741 07:35:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.741 07:35:46 -- scripts/common.sh@365 -- # decimal 2 00:08:20.741 07:35:46 -- scripts/common.sh@352 -- # local d=2 00:08:20.741 07:35:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.741 07:35:46 -- scripts/common.sh@354 -- # echo 2 00:08:20.741 07:35:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.741 07:35:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.741 07:35:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.741 07:35:46 -- scripts/common.sh@367 -- # return 0 00:08:20.741 07:35:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.741 07:35:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.741 --rc genhtml_branch_coverage=1 00:08:20.741 --rc genhtml_function_coverage=1 00:08:20.741 --rc genhtml_legend=1 00:08:20.741 --rc geninfo_all_blocks=1 00:08:20.741 --rc geninfo_unexecuted_blocks=1 00:08:20.741 00:08:20.741 ' 00:08:20.741 07:35:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.741 --rc genhtml_branch_coverage=1 00:08:20.741 --rc genhtml_function_coverage=1 00:08:20.741 --rc genhtml_legend=1 00:08:20.741 --rc geninfo_all_blocks=1 00:08:20.741 --rc geninfo_unexecuted_blocks=1 00:08:20.741 00:08:20.741 ' 00:08:20.741 07:35:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.741 --rc genhtml_branch_coverage=1 00:08:20.741 --rc genhtml_function_coverage=1 00:08:20.741 --rc genhtml_legend=1 00:08:20.741 --rc geninfo_all_blocks=1 00:08:20.741 --rc geninfo_unexecuted_blocks=1 00:08:20.741 00:08:20.741 ' 00:08:20.741 07:35:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.741 --rc genhtml_branch_coverage=1 00:08:20.741 --rc genhtml_function_coverage=1 00:08:20.741 --rc genhtml_legend=1 00:08:20.741 --rc geninfo_all_blocks=1 00:08:20.741 --rc geninfo_unexecuted_blocks=1 00:08:20.741 00:08:20.741 ' 00:08:20.741 07:35:46 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.741 07:35:46 -- nvmf/common.sh@7 -- # uname -s 00:08:20.999 07:35:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.999 07:35:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.999 07:35:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.999 07:35:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.999 07:35:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.999 07:35:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.999 07:35:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.999 07:35:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.999 07:35:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.999 07:35:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.999 07:35:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:20.999 
07:35:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:20.999 07:35:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.999 07:35:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.999 07:35:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:20.999 07:35:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.999 07:35:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.999 07:35:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.999 07:35:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.999 07:35:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.999 07:35:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.000 07:35:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.000 07:35:46 -- paths/export.sh@5 -- # export PATH 00:08:21.000 07:35:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.000 07:35:46 -- nvmf/common.sh@46 -- # : 0 00:08:21.000 07:35:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:21.000 07:35:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:21.000 07:35:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:21.000 07:35:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.000 07:35:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.000 07:35:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
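For reference, the lt/cmp_versions trace above is only deciding whether the installed lcov predates 2.x: both version strings are split on their separators and compared field by field, and the older-lcov branch then keeps the extra --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options. A minimal bash sketch of that comparison (not the SPDK helper itself; it assumes plain dot-separated numeric versions):

lt() {                        # usage: lt A B  -> exit 0 when version A < version B
  local IFS=. i x y
  local -a a=($1) b=($2)      # split on dots, e.g. 1.15 -> (1 15)
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    x=${a[i]:-0} y=${b[i]:-0} # missing fields count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                    # equal versions are not "less than"
}

lt 1.15 2 && echo 'lcov is older than 2.x, keep the extra --rc coverage flags'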
00:08:21.000 07:35:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:21.000 07:35:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:21.000 07:35:46 -- target/zcopy.sh@12 -- # nvmftestinit 00:08:21.000 07:35:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:21.000 07:35:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.000 07:35:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:21.000 07:35:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:21.000 07:35:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:21.000 07:35:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.000 07:35:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.000 07:35:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.000 07:35:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:21.000 07:35:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:21.000 07:35:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:21.000 07:35:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:21.000 07:35:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:21.000 07:35:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:21.000 07:35:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.000 07:35:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.000 07:35:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:21.000 07:35:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:21.000 07:35:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:21.000 07:35:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:21.000 07:35:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:21.000 07:35:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.000 07:35:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:21.000 07:35:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:21.000 07:35:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:21.000 07:35:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:21.000 07:35:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:21.000 07:35:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:21.000 Cannot find device "nvmf_tgt_br" 00:08:21.000 07:35:46 -- nvmf/common.sh@154 -- # true 00:08:21.000 07:35:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.000 Cannot find device "nvmf_tgt_br2" 00:08:21.000 07:35:46 -- nvmf/common.sh@155 -- # true 00:08:21.000 07:35:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:21.000 07:35:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:21.000 Cannot find device "nvmf_tgt_br" 00:08:21.000 07:35:46 -- nvmf/common.sh@157 -- # true 00:08:21.000 07:35:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:21.000 Cannot find device "nvmf_tgt_br2" 00:08:21.000 07:35:46 -- nvmf/common.sh@158 -- # true 00:08:21.000 07:35:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:21.000 07:35:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:21.000 07:35:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:21.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.000 07:35:46 -- nvmf/common.sh@161 -- # true 00:08:21.000 07:35:46 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.000 07:35:46 -- nvmf/common.sh@162 -- # true 00:08:21.000 07:35:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:21.000 07:35:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:21.000 07:35:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:21.000 07:35:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:21.000 07:35:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:21.000 07:35:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:21.000 07:35:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:21.000 07:35:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:21.000 07:35:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:21.000 07:35:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:21.000 07:35:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:21.000 07:35:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:21.000 07:35:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:21.000 07:35:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:21.258 07:35:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:21.258 07:35:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:21.258 07:35:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:21.258 07:35:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:21.258 07:35:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:21.258 07:35:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:21.258 07:35:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:21.258 07:35:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:21.258 07:35:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:21.258 07:35:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:21.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:21.258 00:08:21.258 --- 10.0.0.2 ping statistics --- 00:08:21.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.258 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:21.258 07:35:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:21.258 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:21.258 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:08:21.258 00:08:21.258 --- 10.0.0.3 ping statistics --- 00:08:21.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.258 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:21.258 07:35:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:21.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:21.258 00:08:21.258 --- 10.0.0.1 ping statistics --- 00:08:21.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.258 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:21.258 07:35:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.258 07:35:46 -- nvmf/common.sh@421 -- # return 0 00:08:21.258 07:35:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:21.258 07:35:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.258 07:35:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:21.258 07:35:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:21.258 07:35:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.258 07:35:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:21.258 07:35:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:21.258 07:35:46 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:21.259 07:35:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:21.259 07:35:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.259 07:35:46 -- common/autotest_common.sh@10 -- # set +x 00:08:21.259 07:35:46 -- nvmf/common.sh@469 -- # nvmfpid=62593 00:08:21.259 07:35:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:21.259 07:35:46 -- nvmf/common.sh@470 -- # waitforlisten 62593 00:08:21.259 07:35:46 -- common/autotest_common.sh@829 -- # '[' -z 62593 ']' 00:08:21.259 07:35:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.259 07:35:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.259 07:35:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.259 07:35:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.259 07:35:46 -- common/autotest_common.sh@10 -- # set +x 00:08:21.259 [2024-12-02 07:35:46.782845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:21.259 [2024-12-02 07:35:46.782937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.517 [2024-12-02 07:35:46.923281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.517 [2024-12-02 07:35:46.969795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:21.517 [2024-12-02 07:35:46.969911] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.517 [2024-12-02 07:35:46.969923] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.517 [2024-12-02 07:35:46.969930] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
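Condensed from the nvmf_veth_init commands traced above, the network fixture behind these pings is two veth pairs joined by a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace; the second target interface (nvmf_tgt_if2, 10.0.0.3) follows the same pattern and is omitted from this sketch:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge tying both pairs together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target reachability, as checked in the log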
00:08:21.517 [2024-12-02 07:35:46.969958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.081 07:35:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.081 07:35:47 -- common/autotest_common.sh@862 -- # return 0 00:08:22.081 07:35:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:22.081 07:35:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.081 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 07:35:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.081 07:35:47 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:22.081 07:35:47 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:22.081 07:35:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 [2024-12-02 07:35:47.667686] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.081 07:35:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 07:35:47 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.081 07:35:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 07:35:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 07:35:47 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.081 07:35:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 [2024-12-02 07:35:47.683830] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.081 07:35:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 07:35:47 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.081 07:35:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.081 07:35:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.081 07:35:47 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:22.081 07:35:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.081 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.339 malloc0 00:08:22.339 07:35:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.339 07:35:47 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:22.339 07:35:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.339 07:35:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.339 07:35:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.339 07:35:47 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:22.339 07:35:47 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:22.339 07:35:47 -- nvmf/common.sh@520 -- # config=() 00:08:22.339 07:35:47 -- nvmf/common.sh@520 -- # local subsystem config 00:08:22.339 07:35:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:22.339 07:35:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:22.339 { 00:08:22.339 "params": { 00:08:22.339 "name": "Nvme$subsystem", 00:08:22.339 "trtype": "$TEST_TRANSPORT", 
00:08:22.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.339 "adrfam": "ipv4", 00:08:22.339 "trsvcid": "$NVMF_PORT", 00:08:22.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.339 "hdgst": ${hdgst:-false}, 00:08:22.339 "ddgst": ${ddgst:-false} 00:08:22.339 }, 00:08:22.339 "method": "bdev_nvme_attach_controller" 00:08:22.339 } 00:08:22.339 EOF 00:08:22.339 )") 00:08:22.339 07:35:47 -- nvmf/common.sh@542 -- # cat 00:08:22.339 07:35:47 -- nvmf/common.sh@544 -- # jq . 00:08:22.339 07:35:47 -- nvmf/common.sh@545 -- # IFS=, 00:08:22.339 07:35:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:22.339 "params": { 00:08:22.339 "name": "Nvme1", 00:08:22.339 "trtype": "tcp", 00:08:22.339 "traddr": "10.0.0.2", 00:08:22.339 "adrfam": "ipv4", 00:08:22.339 "trsvcid": "4420", 00:08:22.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.339 "hdgst": false, 00:08:22.339 "ddgst": false 00:08:22.339 }, 00:08:22.339 "method": "bdev_nvme_attach_controller" 00:08:22.339 }' 00:08:22.339 [2024-12-02 07:35:47.768110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.339 [2024-12-02 07:35:47.768205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62625 ] 00:08:22.339 [2024-12-02 07:35:47.907414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.596 [2024-12-02 07:35:47.973540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.596 Running I/O for 10 seconds... 00:08:32.564 00:08:32.564 Latency(us) 00:08:32.564 [2024-12-02T07:35:58.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.564 [2024-12-02T07:35:58.188Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:32.564 Verification LBA range: start 0x0 length 0x1000 00:08:32.564 Nvme1n1 : 10.01 10965.90 85.67 0.00 0.00 11643.14 1429.88 19660.80 00:08:32.564 [2024-12-02T07:35:58.188Z] =================================================================================================================== 00:08:32.564 [2024-12-02T07:35:58.188Z] Total : 10965.90 85.67 0.00 0.00 11643.14 1429.88 19660.80 00:08:32.823 07:35:58 -- target/zcopy.sh@39 -- # perfpid=62738 00:08:32.823 07:35:58 -- target/zcopy.sh@41 -- # xtrace_disable 00:08:32.823 07:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:32.823 07:35:58 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:32.823 07:35:58 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:32.823 07:35:58 -- nvmf/common.sh@520 -- # config=() 00:08:32.823 07:35:58 -- nvmf/common.sh@520 -- # local subsystem config 00:08:32.823 07:35:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:32.823 07:35:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:32.823 { 00:08:32.823 "params": { 00:08:32.823 "name": "Nvme$subsystem", 00:08:32.823 "trtype": "$TEST_TRANSPORT", 00:08:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.823 "adrfam": "ipv4", 00:08:32.823 "trsvcid": "$NVMF_PORT", 00:08:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.823 "hdgst": ${hdgst:-false}, 00:08:32.823 "ddgst": ${ddgst:-false} 
00:08:32.823 }, 00:08:32.823 "method": "bdev_nvme_attach_controller" 00:08:32.823 } 00:08:32.823 EOF 00:08:32.823 )") 00:08:32.823 07:35:58 -- nvmf/common.sh@542 -- # cat 00:08:32.823 [2024-12-02 07:35:58.288380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.288455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 07:35:58 -- nvmf/common.sh@544 -- # jq . 00:08:32.823 07:35:58 -- nvmf/common.sh@545 -- # IFS=, 00:08:32.823 07:35:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:32.823 "params": { 00:08:32.823 "name": "Nvme1", 00:08:32.823 "trtype": "tcp", 00:08:32.823 "traddr": "10.0.0.2", 00:08:32.823 "adrfam": "ipv4", 00:08:32.823 "trsvcid": "4420", 00:08:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:32.823 "hdgst": false, 00:08:32.823 "ddgst": false 00:08:32.823 }, 00:08:32.823 "method": "bdev_nvme_attach_controller" 00:08:32.823 }' 00:08:32.823 [2024-12-02 07:35:58.300282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.300349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.312282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.312328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.322227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:32.823 [2024-12-02 07:35:58.322302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62738 ] 00:08:32.823 [2024-12-02 07:35:58.324295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.324340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.336296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.336357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.348297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.348360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.360316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.360360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.372309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.823 [2024-12-02 07:35:58.372344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.823 [2024-12-02 07:35:58.384314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.824 [2024-12-02 07:35:58.384349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.824 [2024-12-02 07:35:58.396314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.824 [2024-12-02 07:35:58.396351] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.824 [2024-12-02 07:35:58.408307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.824 [2024-12-02 07:35:58.408350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.824 [2024-12-02 07:35:58.420333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.824 [2024-12-02 07:35:58.420368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.824 [2024-12-02 07:35:58.432312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.824 [2024-12-02 07:35:58.432358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.824 [2024-12-02 07:35:58.444332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.824 [2024-12-02 07:35:58.444377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.450216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.083 [2024-12-02 07:35:58.456361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.456404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.468354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.468392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.480376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.480423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.492363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.492413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.501326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.083 [2024-12-02 07:35:58.504373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.504424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.516416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.516443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.528396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.528445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.540395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.540443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.552395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.552443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.564407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:08:33.083 [2024-12-02 07:35:58.564452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.576426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.576468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.588438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.588482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.600438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.600481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.612438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.612480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.624455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.624500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 Running I/O for 5 seconds... 00:08:33.083 [2024-12-02 07:35:58.636637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.636680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.652168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.652212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.667976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.668019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.685231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.685275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.083 [2024-12-02 07:35:58.701023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.083 [2024-12-02 07:35:58.701068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.342 [2024-12-02 07:35:58.718577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.342 [2024-12-02 07:35:58.718620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.342 [2024-12-02 07:35:58.735349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.342 [2024-12-02 07:35:58.735409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.751622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.751666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.769206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.769252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:08:33.343 [2024-12-02 07:35:58.784292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.784384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.801153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.801196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.817640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.817685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.834258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.834301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.851115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.851159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.867682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.867727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.884771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.884814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.901253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.901296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.917088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.917132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.934395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.934438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.343 [2024-12-02 07:35:58.951549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.343 [2024-12-02 07:35:58.951592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.601 [2024-12-02 07:35:58.967841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.601 [2024-12-02 07:35:58.967885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.601 [2024-12-02 07:35:58.985239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.601 [2024-12-02 07:35:58.985283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.601 [2024-12-02 07:35:59.000834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.601 [2024-12-02 07:35:59.000878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.601 [2024-12-02 07:35:59.017622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
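The repeating "Requested NSID 1 already in use" / "Unable to add namespace" pairs in this second bdevperf pass appear to come from the test re-issuing nvmf_subsystem_add_ns for an NSID that is already attached while I/O is in flight, which the target rejects each time. For reference, the target-side setup performed by the rpc_cmd calls traced earlier boils down to this scripts/rpc.py sequence (a sketch, assuming rpc_cmd forwards to rpc.py over /var/tmp/spdk.sock the same way the multipath teardown earlier calls it directly):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # transport options exactly as traced; --zcopy enables the zero-copy path under test
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0               # small malloc bdev: 32 MB, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# re-issuing this add_ns call for NSID 1 while it is attached fails with the
# "already in use" error that repeats around this point in the log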
00:08:33.601 [2024-12-02 07:35:59.017667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.601 [2024-12-02 07:35:59.034082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.601 [2024-12-02 07:35:59.034126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.601 [2024-12-02 07:35:59.050641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.601 [2024-12-02 07:35:59.050702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.067475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.067519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.083943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.083987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.100618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.100663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.117676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.117720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.134725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.134768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.151068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.151112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.167711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.167754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.184106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.184149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.200625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.200671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.602 [2024-12-02 07:35:59.217548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.602 [2024-12-02 07:35:59.217592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.234464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.234522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.250818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.250862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.267507] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.267551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.283890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.283933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.301049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.301093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.316919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.316962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.333786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.333831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.350280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.350334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.367392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.367437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.383551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.383596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.400544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.400589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.416952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.416995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.434615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.434659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.450289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.450362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.460983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.461027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.867 [2024-12-02 07:35:59.476669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.867 [2024-12-02 07:35:59.476712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.165 [2024-12-02 07:35:59.494549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.165 [2024-12-02 07:35:59.494593] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:34.165 [2024-12-02 07:35:59.509431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:34.165 [2024-12-02 07:35:59.509476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of records ("subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace") is logged for every subsequent add-namespace attempt from 2024-12-02 07:35:59.525 through 07:36:03.631 (elapsed 00:08:34.165 to 00:08:38.072); the repeated records are elided here ...]
00:08:38.072
00:08:38.072 Latency(us)
00:08:38.072 [2024-12-02T07:36:03.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:38.072 [2024-12-02T07:36:03.696Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:38.072 Nvme1n1 : 5.01 13447.20 105.06 0.00 0.00 9508.16 3902.37 19899.11
00:08:38.072 [2024-12-02T07:36:03.696Z] ===================================================================================================================
00:08:38.072 [2024-12-02T07:36:03.696Z] Total : 13447.20 105.06 0.00 0.00 9508.16 3902.37 19899.11
[... the same error pair resumes from 2024-12-02 07:36:03.641 through 07:36:03.785 (elapsed 00:08:38.072 to 00:08:38.332); the repeated records are elided here ...]
Unable to add namespace 00:08:38.332 [2024-12-02 07:36:03.797681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.332 [2024-12-02 07:36:03.797722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.332 [2024-12-02 07:36:03.809670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.332 [2024-12-02 07:36:03.809707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.332 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62738) - No such process 00:08:38.332 07:36:03 -- target/zcopy.sh@49 -- # wait 62738 00:08:38.332 07:36:03 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.332 07:36:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.332 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:38.332 07:36:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.332 07:36:03 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.332 07:36:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.332 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:38.332 delay0 00:08:38.332 07:36:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.332 07:36:03 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:38.332 07:36:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.332 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:38.332 07:36:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.332 07:36:03 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:38.590 [2024-12-02 07:36:04.006598] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:45.151 Initializing NVMe Controllers 00:08:45.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:45.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:45.151 Initialization complete. Launching workers. 
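The trace just above is the tail of the zcopy abort path in zcopy.sh: the script removes the original namespace from cnode1, wraps malloc0 in a delay bdev, re-exports delay0 as NSID 1, and then runs the abort example against the target for five seconds. All four delay-bdev latency knobs are set to 1000000 (microseconds in the delay bdev's units, roughly one second per I/O), which presumably keeps enough I/O outstanding for the abort commands to have something to cancel. A condensed replay of those steps, written here as direct scripts/rpc.py calls rather than the test framework's rpc_cmd wrapper (that mapping is an assumption for illustration, not shown in this log; error handling omitted):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # avg/p99 read and write latency
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'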
00:08:45.151 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:08:45.151 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:08:45.151 success 252, unsuccess 122, failed 0 00:08:45.151 07:36:10 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:45.151 07:36:10 -- target/zcopy.sh@60 -- # nvmftestfini 00:08:45.151 07:36:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:45.151 07:36:10 -- nvmf/common.sh@116 -- # sync 00:08:45.151 07:36:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:45.151 07:36:10 -- nvmf/common.sh@119 -- # set +e 00:08:45.151 07:36:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:45.151 07:36:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:45.151 rmmod nvme_tcp 00:08:45.151 rmmod nvme_fabrics 00:08:45.151 rmmod nvme_keyring 00:08:45.151 07:36:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.151 07:36:10 -- nvmf/common.sh@123 -- # set -e 00:08:45.151 07:36:10 -- nvmf/common.sh@124 -- # return 0 00:08:45.151 07:36:10 -- nvmf/common.sh@477 -- # '[' -n 62593 ']' 00:08:45.151 07:36:10 -- nvmf/common.sh@478 -- # killprocess 62593 00:08:45.151 07:36:10 -- common/autotest_common.sh@936 -- # '[' -z 62593 ']' 00:08:45.151 07:36:10 -- common/autotest_common.sh@940 -- # kill -0 62593 00:08:45.151 07:36:10 -- common/autotest_common.sh@941 -- # uname 00:08:45.151 07:36:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.151 07:36:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62593 00:08:45.151 07:36:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:45.151 07:36:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:45.151 killing process with pid 62593 00:08:45.151 07:36:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62593' 00:08:45.151 07:36:10 -- common/autotest_common.sh@955 -- # kill 62593 00:08:45.151 07:36:10 -- common/autotest_common.sh@960 -- # wait 62593 00:08:45.151 07:36:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:45.151 07:36:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:45.151 07:36:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:45.151 07:36:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.151 07:36:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:45.151 07:36:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.151 07:36:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.151 07:36:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.151 07:36:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:45.151 00:08:45.151 real 0m24.168s 00:08:45.151 user 0m39.681s 00:08:45.151 sys 0m6.485s 00:08:45.151 07:36:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.151 07:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.151 ************************************ 00:08:45.151 END TEST nvmf_zcopy 00:08:45.151 ************************************ 00:08:45.151 07:36:10 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:45.151 07:36:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:45.151 07:36:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.151 07:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.151 ************************************ 00:08:45.151 START TEST nvmf_nmic 
00:08:45.151 ************************************ 00:08:45.151 07:36:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:45.151 * Looking for test storage... 00:08:45.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.151 07:36:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:45.151 07:36:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:45.151 07:36:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:45.151 07:36:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:45.151 07:36:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:45.151 07:36:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:45.151 07:36:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:45.151 07:36:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:45.151 07:36:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:45.151 07:36:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.151 07:36:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:45.151 07:36:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:45.151 07:36:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:45.151 07:36:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:45.151 07:36:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:45.152 07:36:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:45.152 07:36:10 -- scripts/common.sh@344 -- # : 1 00:08:45.152 07:36:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:45.152 07:36:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.152 07:36:10 -- scripts/common.sh@364 -- # decimal 1 00:08:45.152 07:36:10 -- scripts/common.sh@352 -- # local d=1 00:08:45.152 07:36:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.152 07:36:10 -- scripts/common.sh@354 -- # echo 1 00:08:45.152 07:36:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:45.152 07:36:10 -- scripts/common.sh@365 -- # decimal 2 00:08:45.152 07:36:10 -- scripts/common.sh@352 -- # local d=2 00:08:45.152 07:36:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.152 07:36:10 -- scripts/common.sh@354 -- # echo 2 00:08:45.152 07:36:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:45.152 07:36:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:45.152 07:36:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:45.152 07:36:10 -- scripts/common.sh@367 -- # return 0 00:08:45.152 07:36:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.152 07:36:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:45.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.152 --rc genhtml_branch_coverage=1 00:08:45.152 --rc genhtml_function_coverage=1 00:08:45.152 --rc genhtml_legend=1 00:08:45.152 --rc geninfo_all_blocks=1 00:08:45.152 --rc geninfo_unexecuted_blocks=1 00:08:45.152 00:08:45.152 ' 00:08:45.152 07:36:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:45.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.152 --rc genhtml_branch_coverage=1 00:08:45.152 --rc genhtml_function_coverage=1 00:08:45.152 --rc genhtml_legend=1 00:08:45.152 --rc geninfo_all_blocks=1 00:08:45.152 --rc geninfo_unexecuted_blocks=1 00:08:45.152 00:08:45.152 ' 00:08:45.152 07:36:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:45.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.152 --rc 
genhtml_branch_coverage=1 00:08:45.152 --rc genhtml_function_coverage=1 00:08:45.152 --rc genhtml_legend=1 00:08:45.152 --rc geninfo_all_blocks=1 00:08:45.152 --rc geninfo_unexecuted_blocks=1 00:08:45.152 00:08:45.152 ' 00:08:45.152 07:36:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:45.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.152 --rc genhtml_branch_coverage=1 00:08:45.152 --rc genhtml_function_coverage=1 00:08:45.152 --rc genhtml_legend=1 00:08:45.152 --rc geninfo_all_blocks=1 00:08:45.152 --rc geninfo_unexecuted_blocks=1 00:08:45.152 00:08:45.152 ' 00:08:45.152 07:36:10 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.152 07:36:10 -- nvmf/common.sh@7 -- # uname -s 00:08:45.152 07:36:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.152 07:36:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.152 07:36:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.152 07:36:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.152 07:36:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.152 07:36:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.152 07:36:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.152 07:36:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.152 07:36:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.152 07:36:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.152 07:36:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:45.152 07:36:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:45.152 07:36:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.152 07:36:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.152 07:36:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.152 07:36:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.152 07:36:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.152 07:36:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.152 07:36:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.152 07:36:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.152 07:36:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.152 07:36:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.152 07:36:10 -- paths/export.sh@5 -- # export PATH 00:08:45.152 07:36:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.152 07:36:10 -- nvmf/common.sh@46 -- # : 0 00:08:45.152 07:36:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:45.152 07:36:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:45.152 07:36:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:45.152 07:36:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.152 07:36:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.152 07:36:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:45.152 07:36:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:45.152 07:36:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:45.152 07:36:10 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.152 07:36:10 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.152 07:36:10 -- target/nmic.sh@14 -- # nvmftestinit 00:08:45.152 07:36:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:45.152 07:36:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.152 07:36:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:45.152 07:36:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:45.152 07:36:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:45.152 07:36:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.152 07:36:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.152 07:36:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.152 07:36:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:45.152 07:36:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:45.152 07:36:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:45.152 07:36:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:45.152 07:36:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:45.152 07:36:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:45.152 07:36:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.152 07:36:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.152 07:36:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:45.152 07:36:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:45.152 07:36:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.152 07:36:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.152 07:36:10 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.152 07:36:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.152 07:36:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.152 07:36:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.152 07:36:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.152 07:36:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.152 07:36:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:45.152 07:36:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:45.152 Cannot find device "nvmf_tgt_br" 00:08:45.152 07:36:10 -- nvmf/common.sh@154 -- # true 00:08:45.152 07:36:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.152 Cannot find device "nvmf_tgt_br2" 00:08:45.152 07:36:10 -- nvmf/common.sh@155 -- # true 00:08:45.152 07:36:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:45.152 07:36:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:45.152 Cannot find device "nvmf_tgt_br" 00:08:45.152 07:36:10 -- nvmf/common.sh@157 -- # true 00:08:45.152 07:36:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:45.152 Cannot find device "nvmf_tgt_br2" 00:08:45.152 07:36:10 -- nvmf/common.sh@158 -- # true 00:08:45.152 07:36:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:45.152 07:36:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:45.152 07:36:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.153 07:36:10 -- nvmf/common.sh@161 -- # true 00:08:45.153 07:36:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.153 07:36:10 -- nvmf/common.sh@162 -- # true 00:08:45.153 07:36:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.153 07:36:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.153 07:36:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.153 07:36:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.153 07:36:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.412 07:36:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.412 07:36:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.412 07:36:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:45.412 07:36:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:45.412 07:36:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:45.412 07:36:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:45.412 07:36:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:45.412 07:36:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:45.412 07:36:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.412 07:36:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.412 07:36:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:08:45.412 07:36:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:45.412 07:36:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:45.412 07:36:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.412 07:36:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.412 07:36:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.412 07:36:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.412 07:36:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.412 07:36:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:45.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:45.412 00:08:45.412 --- 10.0.0.2 ping statistics --- 00:08:45.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.412 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:45.412 07:36:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:45.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:45.412 00:08:45.412 --- 10.0.0.3 ping statistics --- 00:08:45.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.412 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:45.412 07:36:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:45.412 00:08:45.412 --- 10.0.0.1 ping statistics --- 00:08:45.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.412 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:45.412 07:36:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.412 07:36:10 -- nvmf/common.sh@421 -- # return 0 00:08:45.412 07:36:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:45.412 07:36:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.412 07:36:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:45.412 07:36:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:45.412 07:36:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.412 07:36:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:45.412 07:36:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:45.412 07:36:10 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:45.413 07:36:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.413 07:36:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.413 07:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.413 07:36:10 -- nvmf/common.sh@469 -- # nvmfpid=63064 00:08:45.413 07:36:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.413 07:36:10 -- nvmf/common.sh@470 -- # waitforlisten 63064 00:08:45.413 07:36:10 -- common/autotest_common.sh@829 -- # '[' -z 63064 ']' 00:08:45.413 07:36:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.413 07:36:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
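Before nvmf_tgt is launched, nvmf_veth_init has assembled the virtual topology that every later connect in this job depends on: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side interface nvmf_init_if at 10.0.0.1, and a bridge nvmf_br joining the peer ends, plus iptables rules admitting NVMe/TCP traffic on port 4420. The sanity pings above confirm the path in both directions. A condensed replay of the traced commands, kept as a sketch (the second target interface is left out for brevity; all of this requires root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge the two peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target -> initiator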
00:08:45.413 07:36:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.413 07:36:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.413 07:36:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.413 [2024-12-02 07:36:11.008972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.413 [2024-12-02 07:36:11.009055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.671 [2024-12-02 07:36:11.147530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.672 [2024-12-02 07:36:11.197187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.672 [2024-12-02 07:36:11.197353] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.672 [2024-12-02 07:36:11.197366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.672 [2024-12-02 07:36:11.197374] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.672 [2024-12-02 07:36:11.197556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.672 [2024-12-02 07:36:11.198172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.672 [2024-12-02 07:36:11.198318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.672 [2024-12-02 07:36:11.198343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.607 07:36:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.607 07:36:12 -- common/autotest_common.sh@862 -- # return 0 00:08:46.607 07:36:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.607 07:36:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.607 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.607 07:36:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.607 07:36:12 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.607 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.607 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.607 [2024-12-02 07:36:12.071747] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.607 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.607 07:36:12 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.607 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.607 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.607 Malloc0 00:08:46.607 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.607 07:36:12 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.607 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.607 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.607 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.607 07:36:12 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.607 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.607 07:36:12 
-- common/autotest_common.sh@10 -- # set +x 00:08:46.607 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.607 07:36:12 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.607 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.607 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.608 [2024-12-02 07:36:12.133976] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.608 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.608 test case1: single bdev can't be used in multiple subsystems 00:08:46.608 07:36:12 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:46.608 07:36:12 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:46.608 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.608 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.608 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.608 07:36:12 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:46.608 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.608 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.608 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.608 07:36:12 -- target/nmic.sh@28 -- # nmic_status=0 00:08:46.608 07:36:12 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:46.608 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.608 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.608 [2024-12-02 07:36:12.157830] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:46.608 [2024-12-02 07:36:12.157863] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:46.608 [2024-12-02 07:36:12.157890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.608 request: 00:08:46.608 { 00:08:46.608 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:46.608 "namespace": { 00:08:46.608 "bdev_name": "Malloc0" 00:08:46.608 }, 00:08:46.608 "method": "nvmf_subsystem_add_ns", 00:08:46.608 "req_id": 1 00:08:46.608 } 00:08:46.608 Got JSON-RPC error response 00:08:46.608 response: 00:08:46.608 { 00:08:46.608 "code": -32602, 00:08:46.608 "message": "Invalid parameters" 00:08:46.608 } 00:08:46.608 07:36:12 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:46.608 07:36:12 -- target/nmic.sh@29 -- # nmic_status=1 00:08:46.608 07:36:12 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:46.608 Adding namespace failed - expected result. 00:08:46.608 07:36:12 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
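Test case 1 above exercises the bdev claim semantics: adding Malloc0 to cnode1 takes an exclusive_write claim on the bdev, so the attempt to add the same bdev to cnode2 is rejected ("bdev Malloc0 cannot be opened, error=-1") and the RPC returns the "Invalid parameters" error, which is exactly the outcome the script treats as a pass. The rpc_cmd calls traced above correspond to the following plain scripts/rpc.py sequence (a sketch assuming a target already listening on the default RPC socket; only the last call is expected to fail):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # first claim succeeds
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0  # rejected: already claimed by cnode1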
00:08:46.608 test case2: host connect to nvmf target in multiple paths 00:08:46.608 07:36:12 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:46.608 07:36:12 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:46.608 07:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.608 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:08:46.608 [2024-12-02 07:36:12.169944] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:46.608 07:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.608 07:36:12 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:46.866 07:36:12 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:46.866 07:36:12 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.866 07:36:12 -- common/autotest_common.sh@1187 -- # local i=0 00:08:46.866 07:36:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.866 07:36:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:46.866 07:36:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:49.400 07:36:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:49.400 07:36:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:49.400 07:36:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.400 07:36:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:49.400 07:36:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.400 07:36:14 -- common/autotest_common.sh@1197 -- # return 0 00:08:49.400 07:36:14 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:49.400 [global] 00:08:49.400 thread=1 00:08:49.400 invalidate=1 00:08:49.400 rw=write 00:08:49.400 time_based=1 00:08:49.400 runtime=1 00:08:49.400 ioengine=libaio 00:08:49.400 direct=1 00:08:49.400 bs=4096 00:08:49.400 iodepth=1 00:08:49.400 norandommap=0 00:08:49.400 numjobs=1 00:08:49.400 00:08:49.400 verify_dump=1 00:08:49.400 verify_backlog=512 00:08:49.400 verify_state_save=0 00:08:49.400 do_verify=1 00:08:49.400 verify=crc32c-intel 00:08:49.400 [job0] 00:08:49.400 filename=/dev/nvme0n1 00:08:49.400 Could not set queue depth (nvme0n1) 00:08:49.400 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.400 fio-3.35 00:08:49.400 Starting 1 thread 00:08:50.334 00:08:50.334 job0: (groupid=0, jobs=1): err= 0: pid=63156: Mon Dec 2 07:36:15 2024 00:08:50.334 read: IOPS=3196, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:08:50.334 slat (nsec): min=10745, max=61036, avg=14165.57, stdev=4638.46 00:08:50.334 clat (usec): min=121, max=7228, avg=164.05, stdev=143.30 00:08:50.334 lat (usec): min=136, max=7243, avg=178.22, stdev=143.56 00:08:50.334 clat percentiles (usec): 00:08:50.334 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:08:50.334 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:08:50.334 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 
182], 95.00th=[ 192], 00:08:50.334 | 99.00th=[ 217], 99.50th=[ 235], 99.90th=[ 1369], 99.95th=[ 2573], 00:08:50.334 | 99.99th=[ 7242] 00:08:50.334 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:50.334 slat (nsec): min=16169, max=88066, avg=20845.63, stdev=6142.00 00:08:50.334 clat (usec): min=70, max=623, avg=95.85, stdev=16.55 00:08:50.334 lat (usec): min=91, max=663, avg=116.70, stdev=18.93 00:08:50.334 clat percentiles (usec): 00:08:50.334 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 87], 00:08:50.334 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 95], 60.00th=[ 97], 00:08:50.334 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 118], 00:08:50.334 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 258], 99.95th=[ 445], 00:08:50.334 | 99.99th=[ 627] 00:08:50.334 bw ( KiB/s): min=13672, max=13672, per=95.46%, avg=13672.00, stdev= 0.00, samples=1 00:08:50.334 iops : min= 3418, max= 3418, avg=3418.00, stdev= 0.00, samples=1 00:08:50.334 lat (usec) : 100=38.44%, 250=61.31%, 500=0.13%, 750=0.03% 00:08:50.334 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01% 00:08:50.334 cpu : usr=3.50%, sys=8.50%, ctx=6788, majf=0, minf=5 00:08:50.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.334 issued rwts: total=3200,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.334 00:08:50.334 Run status group 0 (all jobs): 00:08:50.334 READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:08:50.334 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:08:50.334 00:08:50.334 Disk stats (read/write): 00:08:50.334 nvme0n1: ios=3001/3072, merge=0/0, ticks=503/320, in_queue=823, util=90.38% 00:08:50.334 07:36:15 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:50.334 07:36:15 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.334 07:36:15 -- common/autotest_common.sh@1208 -- # local i=0 00:08:50.334 07:36:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:50.334 07:36:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.334 07:36:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.334 07:36:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:50.334 07:36:15 -- common/autotest_common.sh@1220 -- # return 0 00:08:50.334 07:36:15 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:50.334 07:36:15 -- target/nmic.sh@53 -- # nvmftestfini 00:08:50.334 07:36:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:50.334 07:36:15 -- nvmf/common.sh@116 -- # sync 00:08:50.334 07:36:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:50.334 07:36:15 -- nvmf/common.sh@119 -- # set +e 00:08:50.334 07:36:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:50.334 07:36:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:50.334 rmmod nvme_tcp 00:08:50.334 rmmod nvme_fabrics 00:08:50.334 rmmod nvme_keyring 00:08:50.334 07:36:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:50.334 07:36:15 -- nvmf/common.sh@123 -- # set -e 00:08:50.334 07:36:15 -- 
nvmf/common.sh@124 -- # return 0 00:08:50.334 07:36:15 -- nvmf/common.sh@477 -- # '[' -n 63064 ']' 00:08:50.334 07:36:15 -- nvmf/common.sh@478 -- # killprocess 63064 00:08:50.334 07:36:15 -- common/autotest_common.sh@936 -- # '[' -z 63064 ']' 00:08:50.334 07:36:15 -- common/autotest_common.sh@940 -- # kill -0 63064 00:08:50.334 07:36:15 -- common/autotest_common.sh@941 -- # uname 00:08:50.334 07:36:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.334 07:36:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63064 00:08:50.334 07:36:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:50.334 07:36:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:50.334 07:36:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63064' 00:08:50.334 killing process with pid 63064 00:08:50.334 07:36:15 -- common/autotest_common.sh@955 -- # kill 63064 00:08:50.334 07:36:15 -- common/autotest_common.sh@960 -- # wait 63064 00:08:50.593 07:36:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:50.593 07:36:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:50.593 07:36:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:50.593 07:36:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.593 07:36:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:50.593 07:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.593 07:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.593 07:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.593 07:36:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:50.593 00:08:50.593 real 0m5.722s 00:08:50.593 user 0m18.639s 00:08:50.593 sys 0m2.100s 00:08:50.593 07:36:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.593 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:50.593 ************************************ 00:08:50.593 END TEST nvmf_nmic 00:08:50.593 ************************************ 00:08:50.593 07:36:16 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.593 07:36:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:50.593 07:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.593 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:50.593 ************************************ 00:08:50.593 START TEST nvmf_fio_target 00:08:50.593 ************************************ 00:08:50.593 07:36:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.853 * Looking for test storage... 
00:08:50.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.853 07:36:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:50.853 07:36:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:50.853 07:36:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:50.853 07:36:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:50.853 07:36:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:50.853 07:36:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:50.853 07:36:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:50.853 07:36:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:50.853 07:36:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:50.853 07:36:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.853 07:36:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:50.853 07:36:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:50.853 07:36:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:50.853 07:36:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:50.853 07:36:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:50.853 07:36:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:50.853 07:36:16 -- scripts/common.sh@344 -- # : 1 00:08:50.853 07:36:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:50.853 07:36:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.853 07:36:16 -- scripts/common.sh@364 -- # decimal 1 00:08:50.853 07:36:16 -- scripts/common.sh@352 -- # local d=1 00:08:50.853 07:36:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.853 07:36:16 -- scripts/common.sh@354 -- # echo 1 00:08:50.853 07:36:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:50.853 07:36:16 -- scripts/common.sh@365 -- # decimal 2 00:08:50.853 07:36:16 -- scripts/common.sh@352 -- # local d=2 00:08:50.853 07:36:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.853 07:36:16 -- scripts/common.sh@354 -- # echo 2 00:08:50.853 07:36:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:50.853 07:36:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:50.853 07:36:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:50.853 07:36:16 -- scripts/common.sh@367 -- # return 0 00:08:50.853 07:36:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.853 07:36:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.853 --rc genhtml_branch_coverage=1 00:08:50.853 --rc genhtml_function_coverage=1 00:08:50.853 --rc genhtml_legend=1 00:08:50.853 --rc geninfo_all_blocks=1 00:08:50.853 --rc geninfo_unexecuted_blocks=1 00:08:50.853 00:08:50.853 ' 00:08:50.853 07:36:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.853 --rc genhtml_branch_coverage=1 00:08:50.853 --rc genhtml_function_coverage=1 00:08:50.853 --rc genhtml_legend=1 00:08:50.853 --rc geninfo_all_blocks=1 00:08:50.853 --rc geninfo_unexecuted_blocks=1 00:08:50.853 00:08:50.853 ' 00:08:50.853 07:36:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.853 --rc genhtml_branch_coverage=1 00:08:50.853 --rc genhtml_function_coverage=1 00:08:50.853 --rc genhtml_legend=1 00:08:50.853 --rc geninfo_all_blocks=1 00:08:50.853 --rc geninfo_unexecuted_blocks=1 00:08:50.853 00:08:50.853 ' 00:08:50.853 
07:36:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.853 --rc genhtml_branch_coverage=1 00:08:50.853 --rc genhtml_function_coverage=1 00:08:50.853 --rc genhtml_legend=1 00:08:50.853 --rc geninfo_all_blocks=1 00:08:50.853 --rc geninfo_unexecuted_blocks=1 00:08:50.853 00:08:50.853 ' 00:08:50.853 07:36:16 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.853 07:36:16 -- nvmf/common.sh@7 -- # uname -s 00:08:50.853 07:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.853 07:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.853 07:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.853 07:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.853 07:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.853 07:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.853 07:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.853 07:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.853 07:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.853 07:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.853 07:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:50.853 07:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:08:50.853 07:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.853 07:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.853 07:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.853 07:36:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.853 07:36:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.853 07:36:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.853 07:36:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.853 07:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.853 07:36:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.853 07:36:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.853 07:36:16 -- paths/export.sh@5 -- # export PATH 00:08:50.853 07:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.853 07:36:16 -- nvmf/common.sh@46 -- # : 0 00:08:50.853 07:36:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.853 07:36:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.854 07:36:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.854 07:36:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.854 07:36:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.854 07:36:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.854 07:36:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.854 07:36:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.854 07:36:16 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.854 07:36:16 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.854 07:36:16 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.854 07:36:16 -- target/fio.sh@16 -- # nvmftestinit 00:08:50.854 07:36:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.854 07:36:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.854 07:36:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.854 07:36:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.854 07:36:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.854 07:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.854 07:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.854 07:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.854 07:36:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:50.854 07:36:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:50.854 07:36:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:50.854 07:36:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:50.854 07:36:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:50.854 07:36:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:50.854 07:36:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.854 07:36:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.854 07:36:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.854 07:36:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:50.854 07:36:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.854 07:36:16 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.854 07:36:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.854 07:36:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.854 07:36:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.854 07:36:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.854 07:36:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.854 07:36:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.854 07:36:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:50.854 07:36:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:50.854 Cannot find device "nvmf_tgt_br" 00:08:50.854 07:36:16 -- nvmf/common.sh@154 -- # true 00:08:50.854 07:36:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.854 Cannot find device "nvmf_tgt_br2" 00:08:50.854 07:36:16 -- nvmf/common.sh@155 -- # true 00:08:50.854 07:36:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:50.854 07:36:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:50.854 Cannot find device "nvmf_tgt_br" 00:08:50.854 07:36:16 -- nvmf/common.sh@157 -- # true 00:08:50.854 07:36:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:50.854 Cannot find device "nvmf_tgt_br2" 00:08:50.854 07:36:16 -- nvmf/common.sh@158 -- # true 00:08:50.854 07:36:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:51.113 07:36:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:51.113 07:36:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.113 07:36:16 -- nvmf/common.sh@161 -- # true 00:08:51.113 07:36:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.113 07:36:16 -- nvmf/common.sh@162 -- # true 00:08:51.113 07:36:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.113 07:36:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.113 07:36:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.113 07:36:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.113 07:36:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.113 07:36:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.113 07:36:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.113 07:36:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:51.113 07:36:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:51.113 07:36:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:51.113 07:36:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:51.113 07:36:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:51.113 07:36:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:51.113 07:36:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.113 07:36:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:51.113 07:36:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.113 07:36:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:51.113 07:36:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:51.113 07:36:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.113 07:36:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.113 07:36:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.113 07:36:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.113 07:36:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.113 07:36:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:51.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:08:51.113 00:08:51.113 --- 10.0.0.2 ping statistics --- 00:08:51.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.113 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:51.113 07:36:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:51.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:08:51.113 00:08:51.113 --- 10.0.0.3 ping statistics --- 00:08:51.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.113 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:51.113 07:36:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:51.113 00:08:51.113 --- 10.0.0.1 ping statistics --- 00:08:51.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.113 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:51.113 07:36:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.113 07:36:16 -- nvmf/common.sh@421 -- # return 0 00:08:51.113 07:36:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:51.113 07:36:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.113 07:36:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:51.113 07:36:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:51.113 07:36:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.113 07:36:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:51.113 07:36:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:51.373 07:36:16 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:51.373 07:36:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:51.373 07:36:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.373 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:51.373 07:36:16 -- nvmf/common.sh@469 -- # nvmfpid=63340 00:08:51.373 07:36:16 -- nvmf/common.sh@470 -- # waitforlisten 63340 00:08:51.373 07:36:16 -- common/autotest_common.sh@829 -- # '[' -z 63340 ']' 00:08:51.373 07:36:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.373 07:36:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.373 07:36:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.373 07:36:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:51.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.373 07:36:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.373 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:08:51.373 [2024-12-02 07:36:16.805318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.373 [2024-12-02 07:36:16.805402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.373 [2024-12-02 07:36:16.944071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.373 [2024-12-02 07:36:16.993883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.373 [2024-12-02 07:36:16.994009] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.373 [2024-12-02 07:36:16.994020] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.373 [2024-12-02 07:36:16.994028] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.373 [2024-12-02 07:36:16.994216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.373 [2024-12-02 07:36:16.994556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.373 [2024-12-02 07:36:16.994703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.373 [2024-12-02 07:36:16.994806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.309 07:36:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.309 07:36:17 -- common/autotest_common.sh@862 -- # return 0 00:08:52.309 07:36:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:52.309 07:36:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.309 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:08:52.309 07:36:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.309 07:36:17 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:52.567 [2024-12-02 07:36:18.010680] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.567 07:36:18 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.826 07:36:18 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:52.826 07:36:18 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.085 07:36:18 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:53.085 07:36:18 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.364 07:36:18 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:53.364 07:36:18 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.627 07:36:19 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:53.627 07:36:19 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:53.886 07:36:19 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.145 07:36:19 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:08:54.145 07:36:19 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.145 07:36:19 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:54.145 07:36:19 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.404 07:36:19 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:54.404 07:36:19 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:54.662 07:36:20 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:54.920 07:36:20 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:54.920 07:36:20 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.178 07:36:20 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:55.178 07:36:20 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:55.436 07:36:20 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.694 [2024-12-02 07:36:21.184077] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.694 07:36:21 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:55.952 07:36:21 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:56.210 07:36:21 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.210 07:36:21 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:56.210 07:36:21 -- common/autotest_common.sh@1187 -- # local i=0 00:08:56.211 07:36:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.211 07:36:21 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:08:56.211 07:36:21 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:08:56.211 07:36:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:58.740 07:36:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:58.740 07:36:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:58.740 07:36:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.740 07:36:23 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:08:58.740 07:36:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.740 07:36:23 -- common/autotest_common.sh@1197 -- # return 0 00:08:58.740 07:36:23 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:58.740 [global] 00:08:58.740 thread=1 00:08:58.740 invalidate=1 00:08:58.740 rw=write 00:08:58.740 time_based=1 00:08:58.740 runtime=1 00:08:58.740 ioengine=libaio 00:08:58.740 direct=1 00:08:58.740 bs=4096 00:08:58.740 iodepth=1 00:08:58.740 norandommap=0 00:08:58.740 numjobs=1 00:08:58.740 00:08:58.740 verify_dump=1 00:08:58.740 verify_backlog=512 00:08:58.740 
verify_state_save=0 00:08:58.740 do_verify=1 00:08:58.740 verify=crc32c-intel 00:08:58.740 [job0] 00:08:58.740 filename=/dev/nvme0n1 00:08:58.740 [job1] 00:08:58.740 filename=/dev/nvme0n2 00:08:58.740 [job2] 00:08:58.740 filename=/dev/nvme0n3 00:08:58.740 [job3] 00:08:58.740 filename=/dev/nvme0n4 00:08:58.740 Could not set queue depth (nvme0n1) 00:08:58.740 Could not set queue depth (nvme0n2) 00:08:58.740 Could not set queue depth (nvme0n3) 00:08:58.740 Could not set queue depth (nvme0n4) 00:08:58.740 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.740 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.740 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.740 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.740 fio-3.35 00:08:58.740 Starting 4 threads 00:08:59.698 00:08:59.698 job0: (groupid=0, jobs=1): err= 0: pid=63525: Mon Dec 2 07:36:25 2024 00:08:59.698 read: IOPS=2818, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:08:59.698 slat (nsec): min=11550, max=64595, avg=14627.88, stdev=4463.52 00:08:59.698 clat (usec): min=129, max=1065, avg=172.67, stdev=25.56 00:08:59.698 lat (usec): min=141, max=1078, avg=187.30, stdev=25.92 00:08:59.698 clat percentiles (usec): 00:08:59.698 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:08:59.698 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:08:59.698 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:08:59.698 | 99.00th=[ 225], 99.50th=[ 229], 99.90th=[ 255], 99.95th=[ 297], 00:08:59.698 | 99.99th=[ 1074] 00:08:59.698 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:59.698 slat (nsec): min=14735, max=69847, avg=22209.66, stdev=6097.01 00:08:59.698 clat (usec): min=93, max=197, avg=127.81, stdev=16.93 00:08:59.698 lat (usec): min=112, max=219, avg=150.02, stdev=18.01 00:08:59.698 clat percentiles (usec): 00:08:59.698 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:08:59.698 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 125], 60.00th=[ 129], 00:08:59.698 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 161], 00:08:59.698 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 190], 00:08:59.698 | 99.99th=[ 198] 00:08:59.698 bw ( KiB/s): min=12288, max=12288, per=30.73%, avg=12288.00, stdev= 0.00, samples=1 00:08:59.698 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:59.698 lat (usec) : 100=0.53%, 250=99.41%, 500=0.05% 00:08:59.698 lat (msec) : 2=0.02% 00:08:59.698 cpu : usr=3.20%, sys=8.00%, ctx=5893, majf=0, minf=3 00:08:59.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.699 issued rwts: total=2821,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.699 job1: (groupid=0, jobs=1): err= 0: pid=63526: Mon Dec 2 07:36:25 2024 00:08:59.699 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:08:59.699 slat (nsec): min=10963, max=58421, avg=14002.96, stdev=3981.70 00:08:59.699 clat (usec): min=129, max=2105, avg=170.47, stdev=41.23 00:08:59.699 lat (usec): min=142, max=2120, 
avg=184.47, stdev=41.51 00:08:59.699 clat percentiles (usec): 00:08:59.699 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:08:59.699 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:08:59.699 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 206], 00:08:59.699 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 247], 99.95th=[ 330], 00:08:59.699 | 99.99th=[ 2114] 00:08:59.699 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:59.699 slat (usec): min=13, max=105, avg=21.59, stdev= 6.77 00:08:59.699 clat (usec): min=90, max=204, avg=130.08, stdev=18.19 00:08:59.699 lat (usec): min=108, max=291, avg=151.67, stdev=19.25 00:08:59.699 clat percentiles (usec): 00:08:59.699 | 1.00th=[ 98], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 116], 00:08:59.699 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 133], 00:08:59.699 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 157], 95.00th=[ 165], 00:08:59.699 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 204], 00:08:59.699 | 99.99th=[ 204] 00:08:59.699 bw ( KiB/s): min=12288, max=12288, per=30.73%, avg=12288.00, stdev= 0.00, samples=1 00:08:59.699 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:59.699 lat (usec) : 100=0.88%, 250=99.09%, 500=0.02% 00:08:59.699 lat (msec) : 4=0.02% 00:08:59.700 cpu : usr=1.90%, sys=8.70%, ctx=5916, majf=0, minf=11 00:08:59.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.700 issued rwts: total=2843,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.700 job2: (groupid=0, jobs=1): err= 0: pid=63527: Mon Dec 2 07:36:25 2024 00:08:59.700 read: IOPS=1561, BW=6246KiB/s (6396kB/s)(6252KiB/1001msec) 00:08:59.700 slat (nsec): min=14614, max=53520, avg=19119.39, stdev=4342.12 00:08:59.700 clat (usec): min=163, max=744, avg=288.54, stdev=41.92 00:08:59.700 lat (usec): min=180, max=786, avg=307.66, stdev=43.19 00:08:59.700 clat percentiles (usec): 00:08:59.700 | 1.00th=[ 221], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 262], 00:08:59.700 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:08:59.700 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:08:59.700 | 99.00th=[ 490], 99.50th=[ 553], 99.90th=[ 611], 99.95th=[ 742], 00:08:59.700 | 99.99th=[ 742] 00:08:59.700 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:59.700 slat (usec): min=22, max=118, avg=29.56, stdev= 7.01 00:08:59.700 clat (usec): min=110, max=311, avg=220.40, stdev=34.23 00:08:59.700 lat (usec): min=135, max=340, avg=249.96, stdev=35.48 00:08:59.700 clat percentiles (usec): 00:08:59.700 | 1.00th=[ 124], 5.00th=[ 155], 10.00th=[ 172], 20.00th=[ 198], 00:08:59.700 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 233], 00:08:59.700 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 269], 00:08:59.700 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 297], 99.95th=[ 306], 00:08:59.700 | 99.99th=[ 314] 00:08:59.700 bw ( KiB/s): min= 8192, max= 8192, per=20.49%, avg=8192.00, stdev= 0.00, samples=2 00:08:59.700 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:08:59.700 lat (usec) : 250=49.38%, 500=50.24%, 750=0.39% 00:08:59.700 cpu : usr=2.00%, sys=7.10%, ctx=3612, majf=0, minf=11 00:08:59.700 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.700 issued rwts: total=1563,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.700 job3: (groupid=0, jobs=1): err= 0: pid=63528: Mon Dec 2 07:36:25 2024 00:08:59.700 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:59.700 slat (nsec): min=14669, max=72281, avg=21390.03, stdev=8201.58 00:08:59.700 clat (usec): min=166, max=3108, avg=307.03, stdev=90.73 00:08:59.700 lat (usec): min=196, max=3125, avg=328.42, stdev=93.61 00:08:59.700 clat percentiles (usec): 00:08:59.700 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 265], 00:08:59.700 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:08:59.700 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 392], 95.00th=[ 445], 00:08:59.700 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 676], 99.95th=[ 3097], 00:08:59.700 | 99.99th=[ 3097] 00:08:59.700 write: IOPS=1813, BW=7253KiB/s (7427kB/s)(7260KiB/1001msec); 0 zone resets 00:08:59.700 slat (nsec): min=19120, max=98069, avg=31910.51, stdev=7959.54 00:08:59.700 clat (usec): min=104, max=479, avg=236.60, stdev=41.71 00:08:59.700 lat (usec): min=129, max=538, avg=268.51, stdev=44.48 00:08:59.700 clat percentiles (usec): 00:08:59.700 | 1.00th=[ 129], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 210], 00:08:59.700 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:08:59.700 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:08:59.700 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 474], 99.95th=[ 482], 00:08:59.700 | 99.99th=[ 482] 00:08:59.700 bw ( KiB/s): min= 8192, max= 8192, per=20.49%, avg=8192.00, stdev= 0.00, samples=1 00:08:59.700 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:59.700 lat (usec) : 250=41.42%, 500=58.37%, 750=0.18% 00:08:59.700 lat (msec) : 4=0.03% 00:08:59.700 cpu : usr=2.30%, sys=6.70%, ctx=3351, majf=0, minf=13 00:08:59.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.700 issued rwts: total=1536,1815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.700 00:08:59.700 Run status group 0 (all jobs): 00:08:59.700 READ: bw=34.2MiB/s (35.9MB/s), 6138KiB/s-11.1MiB/s (6285kB/s-11.6MB/s), io=34.2MiB (35.9MB), run=1001-1001msec 00:08:59.700 WRITE: bw=39.1MiB/s (40.9MB/s), 7253KiB/s-12.0MiB/s (7427kB/s-12.6MB/s), io=39.1MiB (41.0MB), run=1001-1001msec 00:08:59.700 00:08:59.700 Disk stats (read/write): 00:08:59.700 nvme0n1: ios=2565/2560, merge=0/0, ticks=467/339, in_queue=806, util=88.68% 00:08:59.700 nvme0n2: ios=2577/2560, merge=0/0, ticks=467/360, in_queue=827, util=89.47% 00:08:59.700 nvme0n3: ios=1536/1537, merge=0/0, ticks=440/358, in_queue=798, util=89.27% 00:08:59.700 nvme0n4: ios=1388/1536, merge=0/0, ticks=414/384, in_queue=798, util=89.72% 00:08:59.700 07:36:25 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:59.700 [global] 00:08:59.700 thread=1 00:08:59.700 invalidate=1 00:08:59.700 rw=randwrite 00:08:59.700 time_based=1 00:08:59.700 
runtime=1 00:08:59.700 ioengine=libaio 00:08:59.700 direct=1 00:08:59.700 bs=4096 00:08:59.700 iodepth=1 00:08:59.700 norandommap=0 00:08:59.700 numjobs=1 00:08:59.700 00:08:59.700 verify_dump=1 00:08:59.700 verify_backlog=512 00:08:59.700 verify_state_save=0 00:08:59.700 do_verify=1 00:08:59.700 verify=crc32c-intel 00:08:59.700 [job0] 00:08:59.700 filename=/dev/nvme0n1 00:08:59.700 [job1] 00:08:59.700 filename=/dev/nvme0n2 00:08:59.700 [job2] 00:08:59.700 filename=/dev/nvme0n3 00:08:59.700 [job3] 00:08:59.700 filename=/dev/nvme0n4 00:08:59.700 Could not set queue depth (nvme0n1) 00:08:59.700 Could not set queue depth (nvme0n2) 00:08:59.700 Could not set queue depth (nvme0n3) 00:08:59.700 Could not set queue depth (nvme0n4) 00:08:59.957 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.957 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.957 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.957 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:59.957 fio-3.35 00:08:59.957 Starting 4 threads 00:09:01.330 00:09:01.330 job0: (groupid=0, jobs=1): err= 0: pid=63587: Mon Dec 2 07:36:26 2024 00:09:01.330 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:01.331 slat (nsec): min=10804, max=29742, avg=12172.92, stdev=1750.36 00:09:01.331 clat (usec): min=129, max=1783, avg=154.31, stdev=32.08 00:09:01.331 lat (usec): min=141, max=1795, avg=166.49, stdev=32.19 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:01.331 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:09:01.331 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:09:01.331 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 223], 99.95th=[ 424], 00:09:01.331 | 99.99th=[ 1778] 00:09:01.331 write: IOPS=3414, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:09:01.331 slat (nsec): min=12967, max=90875, avg=19072.71, stdev=4891.60 00:09:01.331 clat (usec): min=88, max=1637, avg=121.05, stdev=34.45 00:09:01.331 lat (usec): min=105, max=1655, avg=140.12, stdev=35.02 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 93], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 110], 00:09:01.331 | 30.00th=[ 113], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 123], 00:09:01.331 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 145], 00:09:01.331 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 196], 99.95th=[ 1037], 00:09:01.331 | 99.99th=[ 1631] 00:09:01.331 bw ( KiB/s): min=13552, max=13552, per=31.52%, avg=13552.00, stdev= 0.00, samples=1 00:09:01.331 iops : min= 3388, max= 3388, avg=3388.00, stdev= 0.00, samples=1 00:09:01.331 lat (usec) : 100=2.70%, 250=97.21%, 500=0.03%, 750=0.02% 00:09:01.331 lat (msec) : 2=0.05% 00:09:01.331 cpu : usr=2.10%, sys=8.20%, ctx=6492, majf=0, minf=11 00:09:01.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 issued rwts: total=3072,3418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.331 job1: (groupid=0, jobs=1): err= 0: pid=63588: Mon Dec 2 07:36:26 2024 
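For reference, the per-command trace above condenses to the following sketch; it is not part of the recorded output, rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace, and the single fio line is an illustrative stand-in for the generated four-job file shown above (one job per namespace, rw=write or randwrite depending on the wrapper's -t argument):

  # Target side: TCP transport, backing bdevs, subsystem with four namespaces, listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512            # repeated to create Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect (hostnqn/hostid as in the trace), then a 4 KiB, QD1 verify workload
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4096 --iodepth=1 --runtime=1 --time_based \
      --do_verify=1 --verify=crc32c-intel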
00:09:01.331 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:01.331 slat (nsec): min=10590, max=47074, avg=13343.42, stdev=2611.01 00:09:01.331 clat (usec): min=126, max=505, avg=158.47, stdev=17.35 00:09:01.331 lat (usec): min=138, max=520, avg=171.81, stdev=18.20 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 147], 00:09:01.331 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:09:01.331 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 184], 00:09:01.331 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 338], 99.95th=[ 383], 00:09:01.331 | 99.99th=[ 506] 00:09:01.331 write: IOPS=3242, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:09:01.331 slat (nsec): min=13552, max=88239, avg=20366.76, stdev=4738.20 00:09:01.331 clat (usec): min=86, max=451, avg=121.90, stdev=19.04 00:09:01.331 lat (usec): min=104, max=469, avg=142.26, stdev=19.90 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 98], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:09:01.331 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 123], 00:09:01.331 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 145], 00:09:01.331 | 99.00th=[ 167], 99.50th=[ 241], 99.90th=[ 379], 99.95th=[ 433], 00:09:01.331 | 99.99th=[ 453] 00:09:01.331 bw ( KiB/s): min=12360, max=12360, per=28.75%, avg=12360.00, stdev= 0.00, samples=1 00:09:01.331 iops : min= 3090, max= 3090, avg=3090.00, stdev= 0.00, samples=1 00:09:01.331 lat (usec) : 100=1.01%, 250=98.62%, 500=0.35%, 750=0.02% 00:09:01.331 cpu : usr=2.40%, sys=8.50%, ctx=6318, majf=0, minf=8 00:09:01.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 issued rwts: total=3072,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.331 job2: (groupid=0, jobs=1): err= 0: pid=63589: Mon Dec 2 07:36:26 2024 00:09:01.331 read: IOPS=1848, BW=7393KiB/s (7570kB/s)(7400KiB/1001msec) 00:09:01.331 slat (nsec): min=11954, max=94329, avg=14856.03, stdev=3509.89 00:09:01.331 clat (usec): min=185, max=694, avg=265.79, stdev=24.59 00:09:01.331 lat (usec): min=200, max=712, avg=280.64, stdev=25.55 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:09:01.331 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:09:01.331 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:09:01.331 | 99.00th=[ 359], 99.50th=[ 396], 99.90th=[ 478], 99.95th=[ 693], 00:09:01.331 | 99.99th=[ 693] 00:09:01.331 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:01.331 slat (usec): min=17, max=287, avg=22.55, stdev= 8.11 00:09:01.331 clat (usec): min=109, max=2830, avg=208.93, stdev=63.69 00:09:01.331 lat (usec): min=130, max=2862, avg=231.48, stdev=65.08 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 128], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:09:01.331 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:09:01.331 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 247], 00:09:01.331 | 99.00th=[ 306], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 441], 00:09:01.331 | 99.99th=[ 2835] 00:09:01.331 bw ( KiB/s): min= 8192, max= 8192, per=19.05%, 
avg=8192.00, stdev= 0.00, samples=1 00:09:01.331 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:01.331 lat (usec) : 250=58.75%, 500=41.20%, 750=0.03% 00:09:01.331 lat (msec) : 4=0.03% 00:09:01.331 cpu : usr=1.60%, sys=5.70%, ctx=3911, majf=0, minf=11 00:09:01.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 issued rwts: total=1850,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.331 job3: (groupid=0, jobs=1): err= 0: pid=63590: Mon Dec 2 07:36:26 2024 00:09:01.331 read: IOPS=1860, BW=7441KiB/s (7619kB/s)(7448KiB/1001msec) 00:09:01.331 slat (nsec): min=10757, max=62422, avg=14033.61, stdev=3153.58 00:09:01.331 clat (usec): min=167, max=537, avg=268.08, stdev=30.22 00:09:01.331 lat (usec): min=184, max=567, avg=282.11, stdev=31.17 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 251], 00:09:01.331 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 265], 00:09:01.331 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:09:01.331 | 99.00th=[ 379], 99.50th=[ 498], 99.90th=[ 537], 99.95th=[ 537], 00:09:01.331 | 99.99th=[ 537] 00:09:01.331 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:01.331 slat (nsec): min=14339, max=74930, avg=22661.45, stdev=5244.10 00:09:01.331 clat (usec): min=103, max=420, avg=205.78, stdev=22.36 00:09:01.331 lat (usec): min=128, max=439, avg=228.44, stdev=24.10 00:09:01.331 clat percentiles (usec): 00:09:01.331 | 1.00th=[ 135], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 192], 00:09:01.331 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:09:01.331 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:09:01.331 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 289], 00:09:01.331 | 99.99th=[ 420] 00:09:01.331 bw ( KiB/s): min= 8192, max= 8192, per=19.05%, avg=8192.00, stdev= 0.00, samples=1 00:09:01.331 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:01.331 lat (usec) : 250=58.36%, 500=41.41%, 750=0.23% 00:09:01.331 cpu : usr=1.60%, sys=5.90%, ctx=3910, majf=0, minf=15 00:09:01.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.331 issued rwts: total=1862,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.331 00:09:01.331 Run status group 0 (all jobs): 00:09:01.331 READ: bw=38.5MiB/s (40.3MB/s), 7393KiB/s-12.0MiB/s (7570kB/s-12.6MB/s), io=38.5MiB (40.4MB), run=1001-1001msec 00:09:01.331 WRITE: bw=42.0MiB/s (44.0MB/s), 8184KiB/s-13.3MiB/s (8380kB/s-14.0MB/s), io=42.0MiB (44.1MB), run=1001-1001msec 00:09:01.331 00:09:01.331 Disk stats (read/write): 00:09:01.331 nvme0n1: ios=2610/2942, merge=0/0, ticks=445/378, in_queue=823, util=87.78% 00:09:01.331 nvme0n2: ios=2609/2787, merge=0/0, ticks=465/364, in_queue=829, util=88.04% 00:09:01.331 nvme0n3: ios=1536/1766, merge=0/0, ticks=415/382, in_queue=797, util=89.10% 00:09:01.331 nvme0n4: ios=1536/1795, merge=0/0, ticks=416/389, in_queue=805, util=89.67% 00:09:01.331 07:36:26 -- 
target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:01.331 [global] 00:09:01.331 thread=1 00:09:01.331 invalidate=1 00:09:01.331 rw=write 00:09:01.331 time_based=1 00:09:01.331 runtime=1 00:09:01.331 ioengine=libaio 00:09:01.331 direct=1 00:09:01.331 bs=4096 00:09:01.331 iodepth=128 00:09:01.331 norandommap=0 00:09:01.331 numjobs=1 00:09:01.331 00:09:01.331 verify_dump=1 00:09:01.331 verify_backlog=512 00:09:01.331 verify_state_save=0 00:09:01.331 do_verify=1 00:09:01.331 verify=crc32c-intel 00:09:01.331 [job0] 00:09:01.331 filename=/dev/nvme0n1 00:09:01.331 [job1] 00:09:01.331 filename=/dev/nvme0n2 00:09:01.331 [job2] 00:09:01.331 filename=/dev/nvme0n3 00:09:01.331 [job3] 00:09:01.331 filename=/dev/nvme0n4 00:09:01.331 Could not set queue depth (nvme0n1) 00:09:01.331 Could not set queue depth (nvme0n2) 00:09:01.331 Could not set queue depth (nvme0n3) 00:09:01.331 Could not set queue depth (nvme0n4) 00:09:01.331 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.331 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.331 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.331 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.331 fio-3.35 00:09:01.332 Starting 4 threads 00:09:02.268 00:09:02.268 job0: (groupid=0, jobs=1): err= 0: pid=63643: Mon Dec 2 07:36:27 2024 00:09:02.268 read: IOPS=2712, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1003msec) 00:09:02.268 slat (usec): min=4, max=6357, avg=169.01, stdev=862.03 00:09:02.268 clat (usec): min=257, max=24593, avg=21592.37, stdev=2579.28 00:09:02.268 lat (usec): min=4568, max=24608, avg=21761.38, stdev=2431.31 00:09:02.268 clat percentiles (usec): 00:09:02.268 | 1.00th=[ 5080], 5.00th=[17171], 10.00th=[21103], 20.00th=[21365], 00:09:02.268 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22414], 00:09:02.268 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:02.268 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:09:02.268 | 99.99th=[24511] 00:09:02.268 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:02.268 slat (usec): min=11, max=5532, avg=169.66, stdev=827.93 00:09:02.268 clat (usec): min=15995, max=23928, avg=21949.19, stdev=1056.86 00:09:02.268 lat (usec): min=17044, max=23950, avg=22118.85, stdev=667.86 00:09:02.268 clat percentiles (usec): 00:09:02.268 | 1.00th=[17171], 5.00th=[20841], 10.00th=[21103], 20.00th=[21365], 00:09:02.268 | 30.00th=[21890], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152], 00:09:02.268 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:02.268 | 99.00th=[23725], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:09:02.268 | 99.99th=[23987] 00:09:02.268 bw ( KiB/s): min=12288, max=12288, per=17.32%, avg=12288.00, stdev= 0.00, samples=2 00:09:02.268 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:02.268 lat (usec) : 500=0.02% 00:09:02.268 lat (msec) : 10=0.55%, 20=4.18%, 50=95.25% 00:09:02.268 cpu : usr=2.40%, sys=8.08%, ctx=182, majf=0, minf=17 00:09:02.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:02.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.268 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.268 job1: (groupid=0, jobs=1): err= 0: pid=63648: Mon Dec 2 07:36:27 2024 00:09:02.268 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:09:02.268 slat (usec): min=4, max=4161, avg=78.30, stdev=375.65 00:09:02.268 clat (usec): min=7352, max=14527, avg=10352.19, stdev=907.73 00:09:02.268 lat (usec): min=7600, max=16317, avg=10430.49, stdev=937.16 00:09:02.268 clat percentiles (usec): 00:09:02.268 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:09:02.268 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:09:02.268 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11863], 00:09:02.268 | 99.00th=[13042], 99.50th=[13435], 99.90th=[13698], 99.95th=[14091], 00:09:02.268 | 99.99th=[14484] 00:09:02.268 write: IOPS=6283, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1002msec); 0 zone resets 00:09:02.268 slat (usec): min=10, max=4218, avg=75.40, stdev=431.78 00:09:02.268 clat (usec): min=1384, max=14552, avg=10030.14, stdev=1022.38 00:09:02.268 lat (usec): min=1403, max=14596, avg=10105.54, stdev=1098.64 00:09:02.268 clat percentiles (usec): 00:09:02.268 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[ 9634], 00:09:02.268 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:09:02.268 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[10945], 00:09:02.268 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14484], 99.95th=[14484], 00:09:02.268 | 99.99th=[14615] 00:09:02.268 bw ( KiB/s): min=24625, max=24776, per=34.82%, avg=24700.50, stdev=106.77, samples=2 00:09:02.268 iops : min= 6156, max= 6194, avg=6175.00, stdev=26.87, samples=2 00:09:02.268 lat (msec) : 2=0.13%, 10=39.13%, 20=60.74% 00:09:02.268 cpu : usr=5.09%, sys=15.18%, ctx=390, majf=0, minf=10 00:09:02.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:02.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.268 issued rwts: total=6144,6296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.268 job2: (groupid=0, jobs=1): err= 0: pid=63649: Mon Dec 2 07:36:27 2024 00:09:02.268 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:09:02.268 slat (usec): min=8, max=3695, avg=90.86, stdev=416.16 00:09:02.268 clat (usec): min=9088, max=15255, avg=12015.21, stdev=1299.41 00:09:02.268 lat (usec): min=9110, max=16905, avg=12106.06, stdev=1298.36 00:09:02.268 clat percentiles (usec): 00:09:02.268 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:09:02.268 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:09:02.268 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:09:02.268 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15270], 99.95th=[15270], 00:09:02.268 | 99.99th=[15270] 00:09:02.268 write: IOPS=5374, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1002msec); 0 zone resets 00:09:02.268 slat (usec): min=9, max=3404, avg=91.74, stdev=403.15 00:09:02.268 clat (usec): min=257, max=15980, avg=12098.22, stdev=1247.11 00:09:02.268 lat (usec): min=3389, max=16013, avg=12189.97, stdev=1297.00 00:09:02.268 clat percentiles (usec): 00:09:02.268 | 1.00th=[ 7635], 5.00th=[10814], 10.00th=[11338], 
20.00th=[11469], 00:09:02.268 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:09:02.268 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13435], 95.00th=[13829], 00:09:02.268 | 99.00th=[15270], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:09:02.268 | 99.99th=[15926] 00:09:02.268 bw ( KiB/s): min=20480, max=21576, per=29.64%, avg=21028.00, stdev=774.99, samples=2 00:09:02.268 iops : min= 5120, max= 5394, avg=5257.00, stdev=193.75, samples=2 00:09:02.268 lat (usec) : 500=0.01% 00:09:02.268 lat (msec) : 4=0.26%, 10=3.17%, 20=96.56% 00:09:02.268 cpu : usr=4.70%, sys=14.89%, ctx=507, majf=0, minf=12 00:09:02.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:02.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.268 issued rwts: total=5120,5385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.269 job3: (groupid=0, jobs=1): err= 0: pid=63650: Mon Dec 2 07:36:27 2024 00:09:02.269 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec) 00:09:02.269 slat (usec): min=4, max=7696, avg=169.34, stdev=863.04 00:09:02.269 clat (usec): min=548, max=25828, avg=21649.16, stdev=2684.45 00:09:02.269 lat (usec): min=4181, max=25840, avg=21818.50, stdev=2544.61 00:09:02.269 clat percentiles (usec): 00:09:02.269 | 1.00th=[ 4752], 5.00th=[17171], 10.00th=[21103], 20.00th=[21365], 00:09:02.269 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:09:02.269 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:02.269 | 99.00th=[25822], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:09:02.269 | 99.99th=[25822] 00:09:02.269 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:02.269 slat (usec): min=10, max=5483, avg=169.79, stdev=828.69 00:09:02.269 clat (usec): min=15949, max=23780, avg=21952.93, stdev=1054.14 00:09:02.269 lat (usec): min=17875, max=23805, avg=22122.72, stdev=662.54 00:09:02.269 clat percentiles (usec): 00:09:02.269 | 1.00th=[17171], 5.00th=[20579], 10.00th=[21103], 20.00th=[21627], 00:09:02.269 | 30.00th=[21890], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152], 00:09:02.269 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:02.269 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:09:02.269 | 99.99th=[23725] 00:09:02.269 bw ( KiB/s): min=12288, max=12288, per=17.32%, avg=12288.00, stdev= 0.00, samples=2 00:09:02.269 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:02.269 lat (usec) : 750=0.02% 00:09:02.269 lat (msec) : 10=0.57%, 20=4.11%, 50=95.30% 00:09:02.269 cpu : usr=3.09%, sys=7.37%, ctx=182, majf=0, minf=11 00:09:02.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:02.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.269 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.269 00:09:02.269 Run status group 0 (all jobs): 00:09:02.269 READ: bw=64.9MiB/s (68.1MB/s), 10.6MiB/s-24.0MiB/s (11.1MB/s-25.1MB/s), io=65.3MiB (68.4MB), run=1002-1005msec 00:09:02.269 WRITE: bw=69.3MiB/s (72.6MB/s), 11.9MiB/s-24.5MiB/s (12.5MB/s-25.7MB/s), io=69.6MiB (73.0MB), 
run=1002-1005msec 00:09:02.269 00:09:02.269 Disk stats (read/write): 00:09:02.269 nvme0n1: ios=2482/2560, merge=0/0, ticks=11774/11668, in_queue=23442, util=88.88% 00:09:02.269 nvme0n2: ios=5162/5632, merge=0/0, ticks=25198/23467, in_queue=48665, util=89.64% 00:09:02.269 nvme0n3: ios=4463/4608, merge=0/0, ticks=16835/15930, in_queue=32765, util=89.38% 00:09:02.269 nvme0n4: ios=2453/2560, merge=0/0, ticks=11873/12063, in_queue=23936, util=89.86% 00:09:02.269 07:36:27 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:02.528 [global] 00:09:02.528 thread=1 00:09:02.528 invalidate=1 00:09:02.528 rw=randwrite 00:09:02.528 time_based=1 00:09:02.528 runtime=1 00:09:02.528 ioengine=libaio 00:09:02.528 direct=1 00:09:02.528 bs=4096 00:09:02.528 iodepth=128 00:09:02.528 norandommap=0 00:09:02.528 numjobs=1 00:09:02.528 00:09:02.528 verify_dump=1 00:09:02.528 verify_backlog=512 00:09:02.528 verify_state_save=0 00:09:02.528 do_verify=1 00:09:02.528 verify=crc32c-intel 00:09:02.528 [job0] 00:09:02.528 filename=/dev/nvme0n1 00:09:02.528 [job1] 00:09:02.528 filename=/dev/nvme0n2 00:09:02.528 [job2] 00:09:02.528 filename=/dev/nvme0n3 00:09:02.528 [job3] 00:09:02.528 filename=/dev/nvme0n4 00:09:02.528 Could not set queue depth (nvme0n1) 00:09:02.528 Could not set queue depth (nvme0n2) 00:09:02.528 Could not set queue depth (nvme0n3) 00:09:02.528 Could not set queue depth (nvme0n4) 00:09:02.528 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.528 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.528 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.528 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:02.528 fio-3.35 00:09:02.528 Starting 4 threads 00:09:03.907 00:09:03.907 job0: (groupid=0, jobs=1): err= 0: pid=63706: Mon Dec 2 07:36:29 2024 00:09:03.907 read: IOPS=6073, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1002msec) 00:09:03.907 slat (usec): min=7, max=3643, avg=79.26, stdev=333.24 00:09:03.907 clat (usec): min=232, max=14395, avg=10249.40, stdev=1205.83 00:09:03.907 lat (usec): min=2908, max=14416, avg=10328.66, stdev=1217.46 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[ 6915], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9503], 00:09:03.907 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:03.907 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:09:03.907 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14222], 99.95th=[14222], 00:09:03.907 | 99.99th=[14353] 00:09:03.907 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:09:03.907 slat (usec): min=10, max=2946, avg=77.35, stdev=337.11 00:09:03.907 clat (usec): min=7622, max=13930, avg=10463.61, stdev=938.30 00:09:03.907 lat (usec): min=7653, max=13963, avg=10540.96, stdev=990.53 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9765], 00:09:03.907 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:09:03.907 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11994], 95.00th=[12387], 00:09:03.907 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13829], 99.95th=[13829], 00:09:03.907 | 99.99th=[13960] 00:09:03.907 bw ( KiB/s): min=24576, max=24576, per=34.88%, 
avg=24576.00, stdev= 0.00, samples=2 00:09:03.907 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:03.907 lat (usec) : 250=0.01% 00:09:03.907 lat (msec) : 4=0.34%, 10=34.43%, 20=65.22% 00:09:03.907 cpu : usr=5.19%, sys=15.08%, ctx=546, majf=0, minf=10 00:09:03.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:03.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.907 issued rwts: total=6086,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.907 job1: (groupid=0, jobs=1): err= 0: pid=63707: Mon Dec 2 07:36:29 2024 00:09:03.907 read: IOPS=2729, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1002msec) 00:09:03.907 slat (usec): min=5, max=5645, avg=166.69, stdev=851.64 00:09:03.907 clat (usec): min=996, max=24544, avg=21463.11, stdev=2842.06 00:09:03.907 lat (usec): min=1014, max=24554, avg=21629.80, stdev=2714.23 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[ 5735], 5.00th=[16909], 10.00th=[21103], 20.00th=[21365], 00:09:03.907 | 30.00th=[21627], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:09:03.907 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23200], 95.00th=[23462], 00:09:03.907 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:09:03.907 | 99.99th=[24511] 00:09:03.907 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:03.907 slat (usec): min=11, max=5746, avg=170.65, stdev=837.55 00:09:03.907 clat (usec): min=15917, max=23720, avg=21920.82, stdev=1026.27 00:09:03.907 lat (usec): min=17235, max=24563, avg=22091.47, stdev=618.40 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[16909], 5.00th=[21103], 10.00th=[21103], 20.00th=[21365], 00:09:03.907 | 30.00th=[21627], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152], 00:09:03.907 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:03.907 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:09:03.907 | 99.99th=[23725] 00:09:03.907 bw ( KiB/s): min=12288, max=12288, per=17.44%, avg=12288.00, stdev= 0.00, samples=1 00:09:03.907 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:03.907 lat (usec) : 1000=0.02% 00:09:03.907 lat (msec) : 2=0.24%, 10=0.55%, 20=4.17%, 50=95.02% 00:09:03.907 cpu : usr=2.20%, sys=7.29%, ctx=182, majf=0, minf=19 00:09:03.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:03.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.907 issued rwts: total=2735,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.907 job2: (groupid=0, jobs=1): err= 0: pid=63708: Mon Dec 2 07:36:29 2024 00:09:03.907 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:09:03.907 slat (usec): min=8, max=2851, avg=89.60, stdev=417.63 00:09:03.907 clat (usec): min=8766, max=13382, avg=12016.46, stdev=546.56 00:09:03.907 lat (usec): min=11142, max=13393, avg=12106.06, stdev=356.71 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[ 9503], 5.00th=[11469], 10.00th=[11600], 20.00th=[11731], 00:09:03.907 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:09:03.907 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 
95.00th=[12649], 00:09:03.907 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:09:03.907 | 99.99th=[13435] 00:09:03.907 write: IOPS=5371, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1001msec); 0 zone resets 00:09:03.907 slat (usec): min=11, max=2795, avg=93.00, stdev=394.17 00:09:03.907 clat (usec): min=151, max=13914, avg=12073.99, stdev=1091.66 00:09:03.907 lat (usec): min=2422, max=13955, avg=12166.99, stdev=1018.55 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[ 6063], 5.00th=[11207], 10.00th=[11600], 20.00th=[11863], 00:09:03.907 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12125], 60.00th=[12387], 00:09:03.907 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12780], 95.00th=[13042], 00:09:03.907 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13960], 99.95th=[13960], 00:09:03.907 | 99.99th=[13960] 00:09:03.907 bw ( KiB/s): min=20744, max=20744, per=29.45%, avg=20744.00, stdev= 0.00, samples=1 00:09:03.907 iops : min= 5186, max= 5186, avg=5186.00, stdev= 0.00, samples=1 00:09:03.907 lat (usec) : 250=0.01% 00:09:03.907 lat (msec) : 4=0.30%, 10=2.72%, 20=96.96% 00:09:03.907 cpu : usr=5.00%, sys=14.20%, ctx=330, majf=0, minf=7 00:09:03.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:03.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.907 issued rwts: total=5120,5377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.907 job3: (groupid=0, jobs=1): err= 0: pid=63709: Mon Dec 2 07:36:29 2024 00:09:03.907 read: IOPS=2712, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1003msec) 00:09:03.907 slat (usec): min=5, max=5471, avg=167.85, stdev=849.71 00:09:03.907 clat (usec): min=1054, max=24292, avg=21515.16, stdev=2352.19 00:09:03.907 lat (usec): min=5946, max=24312, avg=21683.01, stdev=2189.28 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[ 6456], 5.00th=[17171], 10.00th=[21103], 20.00th=[21365], 00:09:03.907 | 30.00th=[21627], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:09:03.907 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23462], 00:09:03.907 | 99.00th=[23987], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:09:03.907 | 99.99th=[24249] 00:09:03.907 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:03.907 slat (usec): min=12, max=5426, avg=170.40, stdev=829.52 00:09:03.907 clat (usec): min=16285, max=23735, avg=22006.54, stdev=995.06 00:09:03.907 lat (usec): min=18411, max=24308, avg=22176.95, stdev=568.35 00:09:03.907 clat percentiles (usec): 00:09:03.907 | 1.00th=[17171], 5.00th=[21103], 10.00th=[21365], 20.00th=[21627], 00:09:03.907 | 30.00th=[21890], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:09:03.907 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[23200], 00:09:03.907 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:09:03.907 | 99.99th=[23725] 00:09:03.907 bw ( KiB/s): min=12288, max=12312, per=17.46%, avg=12300.00, stdev=16.97, samples=2 00:09:03.907 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:03.907 lat (msec) : 2=0.02%, 10=0.55%, 20=4.18%, 50=95.25% 00:09:03.907 cpu : usr=2.89%, sys=8.48%, ctx=182, majf=0, minf=5 00:09:03.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:03.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.907 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.907 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.907 00:09:03.907 Run status group 0 (all jobs): 00:09:03.907 READ: bw=64.9MiB/s (68.0MB/s), 10.6MiB/s-23.7MiB/s (11.1MB/s-24.9MB/s), io=65.1MiB (68.2MB), run=1001-1003msec 00:09:03.907 WRITE: bw=68.8MiB/s (72.1MB/s), 12.0MiB/s-24.0MiB/s (12.5MB/s-25.1MB/s), io=69.0MiB (72.4MB), run=1001-1003msec 00:09:03.907 00:09:03.907 Disk stats (read/write): 00:09:03.907 nvme0n1: ios=5170/5428, merge=0/0, ticks=16603/16057, in_queue=32660, util=89.48% 00:09:03.907 nvme0n2: ios=2481/2560, merge=0/0, ticks=11022/10976, in_queue=21998, util=89.99% 00:09:03.907 nvme0n3: ios=4480/4608, merge=0/0, ticks=11958/12274, in_queue=24232, util=89.30% 00:09:03.908 nvme0n4: ios=2432/2560, merge=0/0, ticks=12418/12920, in_queue=25338, util=89.66% 00:09:03.908 07:36:29 -- target/fio.sh@55 -- # sync 00:09:03.908 07:36:29 -- target/fio.sh@59 -- # fio_pid=63722 00:09:03.908 07:36:29 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:03.908 07:36:29 -- target/fio.sh@61 -- # sleep 3 00:09:03.908 [global] 00:09:03.908 thread=1 00:09:03.908 invalidate=1 00:09:03.908 rw=read 00:09:03.908 time_based=1 00:09:03.908 runtime=10 00:09:03.908 ioengine=libaio 00:09:03.908 direct=1 00:09:03.908 bs=4096 00:09:03.908 iodepth=1 00:09:03.908 norandommap=1 00:09:03.908 numjobs=1 00:09:03.908 00:09:03.908 [job0] 00:09:03.908 filename=/dev/nvme0n1 00:09:03.908 [job1] 00:09:03.908 filename=/dev/nvme0n2 00:09:03.908 [job2] 00:09:03.908 filename=/dev/nvme0n3 00:09:03.908 [job3] 00:09:03.908 filename=/dev/nvme0n4 00:09:03.908 Could not set queue depth (nvme0n1) 00:09:03.908 Could not set queue depth (nvme0n2) 00:09:03.908 Could not set queue depth (nvme0n3) 00:09:03.908 Could not set queue depth (nvme0n4) 00:09:03.908 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.908 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.908 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.908 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.908 fio-3.35 00:09:03.908 Starting 4 threads 00:09:07.195 07:36:32 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:07.195 fio: pid=63771, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:07.195 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42221568, buflen=4096 00:09:07.195 07:36:32 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:07.195 fio: pid=63770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:07.195 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=67919872, buflen=4096 00:09:07.453 07:36:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.453 07:36:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:07.453 fio: pid=63768, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:07.453 fio: io_u error on file /dev/nvme0n1: Operation not supported: read 
offset=5255168, buflen=4096 00:09:07.454 07:36:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.454 07:36:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:07.760 fio: pid=63769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:07.760 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54685696, buflen=4096 00:09:07.760 07:36:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.760 07:36:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:07.760 00:09:07.760 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63768: Mon Dec 2 07:36:33 2024 00:09:07.760 read: IOPS=5163, BW=20.2MiB/s (21.1MB/s)(69.0MiB/3422msec) 00:09:07.760 slat (usec): min=7, max=14335, avg=15.59, stdev=150.07 00:09:07.760 clat (usec): min=2, max=7818, avg=176.73, stdev=112.83 00:09:07.760 lat (usec): min=133, max=14502, avg=192.32, stdev=188.50 00:09:07.760 clat percentiles (usec): 00:09:07.760 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:09:07.760 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 167], 00:09:07.760 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 215], 95.00th=[ 289], 00:09:07.760 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 898], 99.95th=[ 2704], 00:09:07.760 | 99.99th=[ 7177] 00:09:07.760 bw ( KiB/s): min=15416, max=24128, per=33.78%, avg=21346.67, stdev=3069.01, samples=6 00:09:07.760 iops : min= 3854, max= 6032, avg=5336.67, stdev=767.25, samples=6 00:09:07.760 lat (usec) : 4=0.01%, 100=0.01%, 250=93.06%, 500=6.73%, 750=0.06% 00:09:07.760 lat (usec) : 1000=0.05% 00:09:07.760 lat (msec) : 2=0.02%, 4=0.05%, 10=0.01% 00:09:07.760 cpu : usr=1.70%, sys=6.20%, ctx=17683, majf=0, minf=1 00:09:07.760 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.760 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.760 issued rwts: total=17668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.760 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.760 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63769: Mon Dec 2 07:36:33 2024 00:09:07.760 read: IOPS=3642, BW=14.2MiB/s (14.9MB/s)(52.2MiB/3666msec) 00:09:07.760 slat (usec): min=7, max=16459, avg=20.03, stdev=285.84 00:09:07.760 clat (usec): min=3, max=2809, avg=253.10, stdev=69.00 00:09:07.760 lat (usec): min=128, max=16671, avg=273.13, stdev=293.41 00:09:07.760 clat percentiles (usec): 00:09:07.760 | 1.00th=[ 130], 5.00th=[ 143], 10.00th=[ 172], 20.00th=[ 229], 00:09:07.760 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:09:07.760 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 326], 00:09:07.760 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 537], 99.95th=[ 783], 00:09:07.760 | 99.99th=[ 2802] 00:09:07.760 bw ( KiB/s): min=12368, max=17063, per=22.76%, avg=14383.86, stdev=1390.17, samples=7 00:09:07.760 iops : min= 3092, max= 4265, avg=3595.86, stdev=347.30, samples=7 00:09:07.760 lat (usec) : 4=0.02%, 10=0.01%, 50=0.01%, 250=40.20%, 500=59.63% 00:09:07.760 lat (usec) : 750=0.05%, 1000=0.02% 00:09:07.760 lat (msec) : 2=0.01%, 4=0.03% 00:09:07.760 cpu : usr=1.26%, 
sys=4.47%, ctx=13370, majf=0, minf=1 00:09:07.760 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.760 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.760 issued rwts: total=13352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.760 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.760 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63770: Mon Dec 2 07:36:33 2024 00:09:07.760 read: IOPS=5178, BW=20.2MiB/s (21.2MB/s)(64.8MiB/3202msec) 00:09:07.760 slat (usec): min=7, max=16294, avg=14.57, stdev=159.62 00:09:07.760 clat (usec): min=3, max=2740, avg=177.29, stdev=40.87 00:09:07.760 lat (usec): min=147, max=16608, avg=191.86, stdev=165.77 00:09:07.760 clat percentiles (usec): 00:09:07.760 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:07.760 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:09:07.760 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 221], 00:09:07.760 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 363], 99.95th=[ 594], 00:09:07.760 | 99.99th=[ 1909] 00:09:07.760 bw ( KiB/s): min=20240, max=21888, per=33.75%, avg=21326.67, stdev=561.45, samples=6 00:09:07.760 iops : min= 5060, max= 5472, avg=5331.67, stdev=140.36, samples=6 00:09:07.760 lat (usec) : 4=0.01%, 20=0.01%, 250=97.09%, 500=2.84%, 750=0.02% 00:09:07.760 lat (usec) : 1000=0.01% 00:09:07.760 lat (msec) : 2=0.02%, 4=0.01% 00:09:07.760 cpu : usr=1.12%, sys=6.37%, ctx=16593, majf=0, minf=2 00:09:07.760 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.760 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.760 issued rwts: total=16583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.760 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.760 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63771: Mon Dec 2 07:36:33 2024 00:09:07.760 read: IOPS=3493, BW=13.6MiB/s (14.3MB/s)(40.3MiB/2951msec) 00:09:07.760 slat (nsec): min=10725, max=70630, avg=14500.77, stdev=4393.02 00:09:07.760 clat (usec): min=145, max=2629, avg=270.12, stdev=47.38 00:09:07.760 lat (usec): min=157, max=2652, avg=284.62, stdev=48.27 00:09:07.760 clat percentiles (usec): 00:09:07.760 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:09:07.760 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:09:07.760 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 326], 00:09:07.760 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 437], 99.95th=[ 586], 00:09:07.760 | 99.99th=[ 2442] 00:09:07.761 bw ( KiB/s): min=14128, max=14648, per=22.56%, avg=14256.00, stdev=224.21, samples=5 00:09:07.761 iops : min= 3532, max= 3662, avg=3564.00, stdev=56.05, samples=5 00:09:07.761 lat (usec) : 250=27.97%, 500=71.96%, 750=0.02%, 1000=0.01% 00:09:07.761 lat (msec) : 2=0.02%, 4=0.02% 00:09:07.761 cpu : usr=0.85%, sys=4.81%, ctx=10310, majf=0, minf=2 00:09:07.761 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.761 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.761 issued rwts: total=10309,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:07.761 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.761 00:09:07.761 Run status group 0 (all jobs): 00:09:07.761 READ: bw=61.7MiB/s (64.7MB/s), 13.6MiB/s-20.2MiB/s (14.3MB/s-21.2MB/s), io=226MiB (237MB), run=2951-3666msec 00:09:07.761 00:09:07.761 Disk stats (read/write): 00:09:07.761 nvme0n1: ios=17434/0, merge=0/0, ticks=3111/0, in_queue=3111, util=94.85% 00:09:07.761 nvme0n2: ios=13068/0, merge=0/0, ticks=3392/0, in_queue=3392, util=94.83% 00:09:07.761 nvme0n3: ios=16291/0, merge=0/0, ticks=2985/0, in_queue=2985, util=95.99% 00:09:07.761 nvme0n4: ios=10089/0, merge=0/0, ticks=2797/0, in_queue=2797, util=96.69% 00:09:08.064 07:36:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.064 07:36:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:08.336 07:36:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.336 07:36:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:08.594 07:36:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.594 07:36:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:08.594 07:36:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:08.594 07:36:34 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:08.852 07:36:34 -- target/fio.sh@69 -- # fio_status=0 00:09:08.852 07:36:34 -- target/fio.sh@70 -- # wait 63722 00:09:08.852 07:36:34 -- target/fio.sh@70 -- # fio_status=4 00:09:08.852 07:36:34 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.852 07:36:34 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.852 07:36:34 -- common/autotest_common.sh@1208 -- # local i=0 00:09:08.852 07:36:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:08.852 07:36:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.852 07:36:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:08.852 07:36:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.852 nvmf hotplug test: fio failed as expected 00:09:08.852 07:36:34 -- common/autotest_common.sh@1220 -- # return 0 00:09:08.852 07:36:34 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:08.852 07:36:34 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:08.852 07:36:34 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.109 07:36:34 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:09.109 07:36:34 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:09.109 07:36:34 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:09.109 07:36:34 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:09.109 07:36:34 -- target/fio.sh@91 -- # nvmftestfini 00:09:09.109 07:36:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:09.109 07:36:34 -- nvmf/common.sh@116 -- # sync 00:09:09.109 07:36:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:09.109 07:36:34 -- nvmf/common.sh@119 -- # set +e 00:09:09.109 07:36:34 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:09:09.109 07:36:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:09.109 rmmod nvme_tcp 00:09:09.109 rmmod nvme_fabrics 00:09:09.109 rmmod nvme_keyring 00:09:09.367 07:36:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:09.367 07:36:34 -- nvmf/common.sh@123 -- # set -e 00:09:09.367 07:36:34 -- nvmf/common.sh@124 -- # return 0 00:09:09.367 07:36:34 -- nvmf/common.sh@477 -- # '[' -n 63340 ']' 00:09:09.367 07:36:34 -- nvmf/common.sh@478 -- # killprocess 63340 00:09:09.368 07:36:34 -- common/autotest_common.sh@936 -- # '[' -z 63340 ']' 00:09:09.368 07:36:34 -- common/autotest_common.sh@940 -- # kill -0 63340 00:09:09.368 07:36:34 -- common/autotest_common.sh@941 -- # uname 00:09:09.368 07:36:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:09.368 07:36:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63340 00:09:09.368 killing process with pid 63340 00:09:09.368 07:36:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:09.368 07:36:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:09.368 07:36:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63340' 00:09:09.368 07:36:34 -- common/autotest_common.sh@955 -- # kill 63340 00:09:09.368 07:36:34 -- common/autotest_common.sh@960 -- # wait 63340 00:09:09.368 07:36:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:09.368 07:36:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:09.368 07:36:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:09.368 07:36:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.368 07:36:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:09.368 07:36:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.368 07:36:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.368 07:36:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.368 07:36:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:09.368 00:09:09.368 real 0m18.782s 00:09:09.368 user 1m9.777s 00:09:09.368 sys 0m10.578s 00:09:09.368 07:36:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.368 07:36:34 -- common/autotest_common.sh@10 -- # set +x 00:09:09.368 ************************************ 00:09:09.368 END TEST nvmf_fio_target 00:09:09.368 ************************************ 00:09:09.635 07:36:35 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.635 07:36:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:09.635 07:36:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.635 07:36:35 -- common/autotest_common.sh@10 -- # set +x 00:09:09.635 ************************************ 00:09:09.635 START TEST nvmf_bdevio 00:09:09.635 ************************************ 00:09:09.635 07:36:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.635 * Looking for test storage... 
00:09:09.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.635 07:36:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:09.635 07:36:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:09.635 07:36:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:09.635 07:36:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:09.635 07:36:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:09.635 07:36:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:09.635 07:36:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:09.635 07:36:35 -- scripts/common.sh@335 -- # IFS=.-: 00:09:09.635 07:36:35 -- scripts/common.sh@335 -- # read -ra ver1 00:09:09.635 07:36:35 -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.635 07:36:35 -- scripts/common.sh@336 -- # read -ra ver2 00:09:09.635 07:36:35 -- scripts/common.sh@337 -- # local 'op=<' 00:09:09.635 07:36:35 -- scripts/common.sh@339 -- # ver1_l=2 00:09:09.635 07:36:35 -- scripts/common.sh@340 -- # ver2_l=1 00:09:09.635 07:36:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:09.635 07:36:35 -- scripts/common.sh@343 -- # case "$op" in 00:09:09.635 07:36:35 -- scripts/common.sh@344 -- # : 1 00:09:09.635 07:36:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:09.635 07:36:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.635 07:36:35 -- scripts/common.sh@364 -- # decimal 1 00:09:09.635 07:36:35 -- scripts/common.sh@352 -- # local d=1 00:09:09.635 07:36:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.635 07:36:35 -- scripts/common.sh@354 -- # echo 1 00:09:09.635 07:36:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:09.635 07:36:35 -- scripts/common.sh@365 -- # decimal 2 00:09:09.635 07:36:35 -- scripts/common.sh@352 -- # local d=2 00:09:09.635 07:36:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.635 07:36:35 -- scripts/common.sh@354 -- # echo 2 00:09:09.635 07:36:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:09.635 07:36:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:09.635 07:36:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:09.635 07:36:35 -- scripts/common.sh@367 -- # return 0 00:09:09.636 07:36:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.636 07:36:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.636 --rc genhtml_branch_coverage=1 00:09:09.636 --rc genhtml_function_coverage=1 00:09:09.636 --rc genhtml_legend=1 00:09:09.636 --rc geninfo_all_blocks=1 00:09:09.636 --rc geninfo_unexecuted_blocks=1 00:09:09.636 00:09:09.636 ' 00:09:09.636 07:36:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.636 --rc genhtml_branch_coverage=1 00:09:09.636 --rc genhtml_function_coverage=1 00:09:09.636 --rc genhtml_legend=1 00:09:09.636 --rc geninfo_all_blocks=1 00:09:09.636 --rc geninfo_unexecuted_blocks=1 00:09:09.636 00:09:09.636 ' 00:09:09.636 07:36:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.636 --rc genhtml_branch_coverage=1 00:09:09.636 --rc genhtml_function_coverage=1 00:09:09.636 --rc genhtml_legend=1 00:09:09.636 --rc geninfo_all_blocks=1 00:09:09.636 --rc geninfo_unexecuted_blocks=1 00:09:09.636 00:09:09.636 ' 00:09:09.636 
07:36:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:09.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.636 --rc genhtml_branch_coverage=1 00:09:09.636 --rc genhtml_function_coverage=1 00:09:09.636 --rc genhtml_legend=1 00:09:09.636 --rc geninfo_all_blocks=1 00:09:09.636 --rc geninfo_unexecuted_blocks=1 00:09:09.636 00:09:09.636 ' 00:09:09.636 07:36:35 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.636 07:36:35 -- nvmf/common.sh@7 -- # uname -s 00:09:09.636 07:36:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.636 07:36:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.636 07:36:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.636 07:36:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.636 07:36:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.636 07:36:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.636 07:36:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.636 07:36:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.636 07:36:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.636 07:36:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.636 07:36:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:09:09.636 07:36:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:09:09.636 07:36:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.636 07:36:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.636 07:36:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.636 07:36:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.636 07:36:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.636 07:36:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.636 07:36:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.636 07:36:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.637 07:36:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.640 07:36:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.640 07:36:35 -- paths/export.sh@5 -- # export PATH 00:09:09.640 07:36:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.641 07:36:35 -- nvmf/common.sh@46 -- # : 0 00:09:09.641 07:36:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:09.641 07:36:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:09.641 07:36:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:09.641 07:36:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.641 07:36:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.641 07:36:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:09.641 07:36:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:09.641 07:36:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:09.641 07:36:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.641 07:36:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.641 07:36:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:09:09.641 07:36:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:09.641 07:36:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.641 07:36:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:09.641 07:36:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:09.641 07:36:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:09.641 07:36:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.641 07:36:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.641 07:36:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.641 07:36:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:09.641 07:36:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:09.641 07:36:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:09.641 07:36:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:09.645 07:36:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:09.645 07:36:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:09.645 07:36:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.645 07:36:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.645 07:36:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:09.646 07:36:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:09.646 07:36:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.646 07:36:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.646 07:36:35 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.646 07:36:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.646 07:36:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.646 07:36:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.646 07:36:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.646 07:36:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.646 07:36:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:09.905 07:36:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:09.905 Cannot find device "nvmf_tgt_br" 00:09:09.905 07:36:35 -- nvmf/common.sh@154 -- # true 00:09:09.905 07:36:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.905 Cannot find device "nvmf_tgt_br2" 00:09:09.905 07:36:35 -- nvmf/common.sh@155 -- # true 00:09:09.905 07:36:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:09.905 07:36:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:09.905 Cannot find device "nvmf_tgt_br" 00:09:09.905 07:36:35 -- nvmf/common.sh@157 -- # true 00:09:09.905 07:36:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:09.905 Cannot find device "nvmf_tgt_br2" 00:09:09.905 07:36:35 -- nvmf/common.sh@158 -- # true 00:09:09.905 07:36:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:09.905 07:36:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:09.905 07:36:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.905 07:36:35 -- nvmf/common.sh@161 -- # true 00:09:09.905 07:36:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.905 07:36:35 -- nvmf/common.sh@162 -- # true 00:09:09.905 07:36:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.905 07:36:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.905 07:36:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.905 07:36:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.905 07:36:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.905 07:36:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.905 07:36:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.905 07:36:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:09.905 07:36:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:09.905 07:36:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:09.905 07:36:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:09.905 07:36:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:09.905 07:36:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:09.905 07:36:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.905 07:36:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.905 07:36:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:09.905 07:36:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:09.905 07:36:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:09.905 07:36:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.905 07:36:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.905 07:36:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.905 07:36:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.163 07:36:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.163 07:36:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:10.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:10.163 00:09:10.163 --- 10.0.0.2 ping statistics --- 00:09:10.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.163 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:10.163 07:36:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:10.163 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.163 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:10.163 00:09:10.163 --- 10.0.0.3 ping statistics --- 00:09:10.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.163 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:10.163 07:36:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:10.163 00:09:10.163 --- 10.0.0.1 ping statistics --- 00:09:10.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.163 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:10.163 07:36:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.163 07:36:35 -- nvmf/common.sh@421 -- # return 0 00:09:10.163 07:36:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:10.163 07:36:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.163 07:36:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:10.163 07:36:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:10.163 07:36:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.163 07:36:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:10.163 07:36:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:10.163 07:36:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:10.163 07:36:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:10.163 07:36:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.163 07:36:35 -- common/autotest_common.sh@10 -- # set +x 00:09:10.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
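For reference, the nvmfappstart call traced above boils down to launching nvmf_tgt inside the test namespace and waiting for its RPC socket. A minimal sketch, assuming the nvmf_tgt_ns_spdk namespace created above and the default /var/tmp/spdk.sock socket (the polling loop is a hypothetical stand-in for the harness's waitforlisten helper):
    # Start the target inside the test namespace with the same flags the harness logs below
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Poll until the RPC listener is up before issuing any rpc.py calls
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done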
00:09:10.163 07:36:35 -- nvmf/common.sh@469 -- # nvmfpid=64036 00:09:10.163 07:36:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:10.163 07:36:35 -- nvmf/common.sh@470 -- # waitforlisten 64036 00:09:10.163 07:36:35 -- common/autotest_common.sh@829 -- # '[' -z 64036 ']' 00:09:10.163 07:36:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.163 07:36:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.163 07:36:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.163 07:36:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.163 07:36:35 -- common/autotest_common.sh@10 -- # set +x 00:09:10.164 [2024-12-02 07:36:35.618075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:10.164 [2024-12-02 07:36:35.618368] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.164 [2024-12-02 07:36:35.751476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.422 [2024-12-02 07:36:35.801998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.422 [2024-12-02 07:36:35.802431] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.422 [2024-12-02 07:36:35.802562] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.422 [2024-12-02 07:36:35.802673] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
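The app notices above spell out how to inspect the 0xFFFF tracepoint mask that was just enabled; restated as commands (the copy destination is an arbitrary example):
    # Take a live snapshot of nvmf events from shared-memory instance 0
    spdk_trace -s nvmf -i 0
    # Or keep the raw trace buffer for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0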
00:09:10.422 [2024-12-02 07:36:35.802863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:10.423 [2024-12-02 07:36:35.802999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:10.423 [2024-12-02 07:36:35.803067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.423 [2024-12-02 07:36:35.803068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:10.988 07:36:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.988 07:36:36 -- common/autotest_common.sh@862 -- # return 0 00:09:10.988 07:36:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:10.988 07:36:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.988 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 07:36:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.988 07:36:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.988 07:36:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.988 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 [2024-12-02 07:36:36.529911] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.988 07:36:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.988 07:36:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.988 07:36:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.988 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 Malloc0 00:09:10.988 07:36:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.988 07:36:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.988 07:36:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.988 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 07:36:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.988 07:36:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.988 07:36:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.988 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.988 07:36:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.988 07:36:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.988 07:36:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.989 07:36:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.989 [2024-12-02 07:36:36.587723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.989 07:36:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.989 07:36:36 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:10.989 07:36:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:10.989 07:36:36 -- nvmf/common.sh@520 -- # config=() 00:09:10.989 07:36:36 -- nvmf/common.sh@520 -- # local subsystem config 00:09:10.989 07:36:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:10.989 07:36:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:10.989 { 00:09:10.989 "params": { 00:09:10.989 "name": "Nvme$subsystem", 00:09:10.989 "trtype": "$TEST_TRANSPORT", 00:09:10.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.989 "adrfam": "ipv4", 00:09:10.989 "trsvcid": "$NVMF_PORT", 00:09:10.989 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.989 "hdgst": ${hdgst:-false}, 00:09:10.989 "ddgst": ${ddgst:-false} 00:09:10.989 }, 00:09:10.989 "method": "bdev_nvme_attach_controller" 00:09:10.989 } 00:09:10.989 EOF 00:09:10.989 )") 00:09:10.989 07:36:36 -- nvmf/common.sh@542 -- # cat 00:09:10.989 07:36:36 -- nvmf/common.sh@544 -- # jq . 00:09:10.989 07:36:36 -- nvmf/common.sh@545 -- # IFS=, 00:09:10.989 07:36:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:10.989 "params": { 00:09:10.989 "name": "Nvme1", 00:09:10.989 "trtype": "tcp", 00:09:10.989 "traddr": "10.0.0.2", 00:09:10.989 "adrfam": "ipv4", 00:09:10.989 "trsvcid": "4420", 00:09:10.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.989 "hdgst": false, 00:09:10.989 "ddgst": false 00:09:10.989 }, 00:09:10.989 "method": "bdev_nvme_attach_controller" 00:09:10.989 }' 00:09:11.247 [2024-12-02 07:36:36.643378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:11.247 [2024-12-02 07:36:36.643462] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64078 ] 00:09:11.247 [2024-12-02 07:36:36.785403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.247 [2024-12-02 07:36:36.853082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.247 [2024-12-02 07:36:36.853205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.247 [2024-12-02 07:36:36.853212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.505 [2024-12-02 07:36:36.994586] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:11.505 [2024-12-02 07:36:36.994831] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:11.505 I/O targets: 00:09:11.505 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:11.505 00:09:11.505 00:09:11.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.505 http://cunit.sourceforge.net/ 00:09:11.505 00:09:11.505 00:09:11.505 Suite: bdevio tests on: Nvme1n1 00:09:11.505 Test: blockdev write read block ...passed 00:09:11.505 Test: blockdev write zeroes read block ...passed 00:09:11.505 Test: blockdev write zeroes read no split ...passed 00:09:11.505 Test: blockdev write zeroes read split ...passed 00:09:11.505 Test: blockdev write zeroes read split partial ...passed 00:09:11.506 Test: blockdev reset ...[2024-12-02 07:36:37.026346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:11.506 [2024-12-02 07:36:37.026732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1987c80 (9): Bad file descriptor 00:09:11.506 [2024-12-02 07:36:37.043960] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:11.506 passed 00:09:11.506 Test: blockdev write read 8 blocks ...passed 00:09:11.506 Test: blockdev write read size > 128k ...passed 00:09:11.506 Test: blockdev write read invalid size ...passed 00:09:11.506 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:11.506 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:11.506 Test: blockdev write read max offset ...passed 00:09:11.506 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:11.506 Test: blockdev writev readv 8 blocks ...passed 00:09:11.506 Test: blockdev writev readv 30 x 1block ...passed 00:09:11.506 Test: blockdev writev readv block ...passed 00:09:11.506 Test: blockdev writev readv size > 128k ...passed 00:09:11.506 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:11.506 Test: blockdev comparev and writev ...[2024-12-02 07:36:37.055186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.055591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.055621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.055634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.055976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.055993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.056008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.056018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.056291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.056323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.056338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.056348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.056819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.056853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.056874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.506 [2024-12-02 07:36:37.056884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:11.506 passed 00:09:11.506 Test: blockdev nvme passthru rw ...passed 00:09:11.506 Test: blockdev nvme passthru vendor specific ...[2024-12-02 07:36:37.058002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.506 [2024-12-02 07:36:37.058146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.058271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.506 [2024-12-02 07:36:37.058288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:11.506 [2024-12-02 07:36:37.058437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.506 [2024-12-02 07:36:37.058454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:11.506 passed 00:09:11.506 Test: blockdev nvme admin passthru ...[2024-12-02 07:36:37.058865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.506 [2024-12-02 07:36:37.058898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:11.506 passed 00:09:11.506 Test: blockdev copy ...passed 00:09:11.506 00:09:11.506 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.506 suites 1 1 n/a 0 0 00:09:11.506 tests 23 23 23 0 0 00:09:11.506 asserts 152 152 152 0 n/a 00:09:11.506 00:09:11.506 Elapsed time = 0.157 seconds 00:09:11.764 07:36:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.764 07:36:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.764 07:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:11.764 07:36:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.764 07:36:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:11.764 07:36:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:09:11.764 07:36:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:11.764 07:36:37 -- nvmf/common.sh@116 -- # sync 00:09:11.764 07:36:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:11.764 07:36:37 -- nvmf/common.sh@119 -- # set +e 00:09:11.764 07:36:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:11.764 07:36:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:11.764 rmmod nvme_tcp 00:09:11.765 rmmod nvme_fabrics 00:09:11.765 rmmod nvme_keyring 00:09:11.765 07:36:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:11.765 07:36:37 -- nvmf/common.sh@123 -- # set -e 00:09:11.765 07:36:37 -- nvmf/common.sh@124 -- # return 0 00:09:11.765 07:36:37 -- nvmf/common.sh@477 -- # '[' -n 64036 ']' 00:09:11.765 07:36:37 -- nvmf/common.sh@478 -- # killprocess 64036 00:09:11.765 07:36:37 -- common/autotest_common.sh@936 -- # '[' -z 64036 ']' 00:09:11.765 07:36:37 -- common/autotest_common.sh@940 -- # kill -0 64036 00:09:11.765 07:36:37 -- common/autotest_common.sh@941 -- # uname 00:09:11.765 07:36:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.765 07:36:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64036 00:09:11.765 killing process with pid 64036 00:09:11.765 
07:36:37 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:09:11.765 07:36:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:09:11.765 07:36:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64036' 00:09:11.765 07:36:37 -- common/autotest_common.sh@955 -- # kill 64036 00:09:11.765 07:36:37 -- common/autotest_common.sh@960 -- # wait 64036 00:09:12.024 07:36:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:12.024 07:36:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:12.024 07:36:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:12.024 07:36:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.024 07:36:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:12.024 07:36:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.024 07:36:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.024 07:36:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.024 07:36:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:12.024 00:09:12.024 real 0m2.556s 00:09:12.024 user 0m8.155s 00:09:12.024 sys 0m0.607s 00:09:12.024 07:36:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.024 07:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:12.024 ************************************ 00:09:12.024 END TEST nvmf_bdevio 00:09:12.024 ************************************ 00:09:12.024 07:36:37 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:09:12.024 07:36:37 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:09:12.024 07:36:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:12.024 07:36:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.024 07:36:37 -- common/autotest_common.sh@10 -- # set +x 00:09:12.024 ************************************ 00:09:12.024 START TEST nvmf_bdevio_no_huge 00:09:12.024 ************************************ 00:09:12.024 07:36:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:09:12.284 * Looking for test storage... 
00:09:12.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.284 07:36:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:12.284 07:36:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:12.284 07:36:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:12.284 07:36:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:12.284 07:36:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:12.284 07:36:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:12.284 07:36:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:12.284 07:36:37 -- scripts/common.sh@335 -- # IFS=.-: 00:09:12.284 07:36:37 -- scripts/common.sh@335 -- # read -ra ver1 00:09:12.284 07:36:37 -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.284 07:36:37 -- scripts/common.sh@336 -- # read -ra ver2 00:09:12.284 07:36:37 -- scripts/common.sh@337 -- # local 'op=<' 00:09:12.284 07:36:37 -- scripts/common.sh@339 -- # ver1_l=2 00:09:12.284 07:36:37 -- scripts/common.sh@340 -- # ver2_l=1 00:09:12.284 07:36:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:12.284 07:36:37 -- scripts/common.sh@343 -- # case "$op" in 00:09:12.284 07:36:37 -- scripts/common.sh@344 -- # : 1 00:09:12.284 07:36:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:12.284 07:36:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.284 07:36:37 -- scripts/common.sh@364 -- # decimal 1 00:09:12.284 07:36:37 -- scripts/common.sh@352 -- # local d=1 00:09:12.284 07:36:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.284 07:36:37 -- scripts/common.sh@354 -- # echo 1 00:09:12.284 07:36:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:12.284 07:36:37 -- scripts/common.sh@365 -- # decimal 2 00:09:12.284 07:36:37 -- scripts/common.sh@352 -- # local d=2 00:09:12.284 07:36:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.284 07:36:37 -- scripts/common.sh@354 -- # echo 2 00:09:12.284 07:36:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:12.284 07:36:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:12.284 07:36:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:12.284 07:36:37 -- scripts/common.sh@367 -- # return 0 00:09:12.284 07:36:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.284 07:36:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:12.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.284 --rc genhtml_branch_coverage=1 00:09:12.284 --rc genhtml_function_coverage=1 00:09:12.284 --rc genhtml_legend=1 00:09:12.284 --rc geninfo_all_blocks=1 00:09:12.284 --rc geninfo_unexecuted_blocks=1 00:09:12.284 00:09:12.284 ' 00:09:12.284 07:36:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:12.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.284 --rc genhtml_branch_coverage=1 00:09:12.284 --rc genhtml_function_coverage=1 00:09:12.284 --rc genhtml_legend=1 00:09:12.284 --rc geninfo_all_blocks=1 00:09:12.284 --rc geninfo_unexecuted_blocks=1 00:09:12.284 00:09:12.284 ' 00:09:12.284 07:36:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:12.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.284 --rc genhtml_branch_coverage=1 00:09:12.284 --rc genhtml_function_coverage=1 00:09:12.284 --rc genhtml_legend=1 00:09:12.284 --rc geninfo_all_blocks=1 00:09:12.284 --rc geninfo_unexecuted_blocks=1 00:09:12.284 00:09:12.284 ' 00:09:12.284 
07:36:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:12.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.284 --rc genhtml_branch_coverage=1 00:09:12.284 --rc genhtml_function_coverage=1 00:09:12.284 --rc genhtml_legend=1 00:09:12.284 --rc geninfo_all_blocks=1 00:09:12.284 --rc geninfo_unexecuted_blocks=1 00:09:12.284 00:09:12.284 ' 00:09:12.284 07:36:37 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.284 07:36:37 -- nvmf/common.sh@7 -- # uname -s 00:09:12.284 07:36:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.284 07:36:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.284 07:36:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.284 07:36:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.284 07:36:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.284 07:36:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.284 07:36:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.284 07:36:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.284 07:36:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.284 07:36:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.284 07:36:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:09:12.284 07:36:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:09:12.284 07:36:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.284 07:36:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.284 07:36:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.284 07:36:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.284 07:36:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.284 07:36:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.284 07:36:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.285 07:36:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.285 07:36:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.285 07:36:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.285 07:36:37 -- paths/export.sh@5 -- # export PATH 00:09:12.285 07:36:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.285 07:36:37 -- nvmf/common.sh@46 -- # : 0 00:09:12.285 07:36:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:12.285 07:36:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:12.285 07:36:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:12.285 07:36:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.285 07:36:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.285 07:36:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:12.285 07:36:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:12.285 07:36:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:12.285 07:36:37 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.285 07:36:37 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.285 07:36:37 -- target/bdevio.sh@14 -- # nvmftestinit 00:09:12.285 07:36:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:12.285 07:36:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.285 07:36:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:12.285 07:36:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:12.285 07:36:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:12.285 07:36:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.285 07:36:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.285 07:36:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.285 07:36:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:12.285 07:36:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:12.285 07:36:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:12.285 07:36:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:12.285 07:36:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:12.285 07:36:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:12.285 07:36:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.285 07:36:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.285 07:36:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.285 07:36:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:12.285 07:36:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.285 07:36:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.285 07:36:37 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.285 07:36:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.285 07:36:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.285 07:36:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.285 07:36:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.285 07:36:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.285 07:36:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:12.285 07:36:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:12.285 Cannot find device "nvmf_tgt_br" 00:09:12.285 07:36:37 -- nvmf/common.sh@154 -- # true 00:09:12.285 07:36:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.285 Cannot find device "nvmf_tgt_br2" 00:09:12.285 07:36:37 -- nvmf/common.sh@155 -- # true 00:09:12.285 07:36:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:12.285 07:36:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:12.285 Cannot find device "nvmf_tgt_br" 00:09:12.285 07:36:37 -- nvmf/common.sh@157 -- # true 00:09:12.285 07:36:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:12.285 Cannot find device "nvmf_tgt_br2" 00:09:12.285 07:36:37 -- nvmf/common.sh@158 -- # true 00:09:12.285 07:36:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:12.544 07:36:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:12.544 07:36:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.544 07:36:37 -- nvmf/common.sh@161 -- # true 00:09:12.544 07:36:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.544 07:36:37 -- nvmf/common.sh@162 -- # true 00:09:12.544 07:36:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.544 07:36:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.544 07:36:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.544 07:36:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.544 07:36:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.544 07:36:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.544 07:36:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.544 07:36:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.544 07:36:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.544 07:36:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:12.544 07:36:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:12.544 07:36:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:12.544 07:36:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:12.544 07:36:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.544 07:36:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.544 07:36:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:12.544 07:36:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:12.544 07:36:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:12.544 07:36:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.544 07:36:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.544 07:36:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.544 07:36:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.544 07:36:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.544 07:36:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:12.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:12.544 00:09:12.544 --- 10.0.0.2 ping statistics --- 00:09:12.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.544 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:12.544 07:36:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:12.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:12.544 00:09:12.544 --- 10.0.0.3 ping statistics --- 00:09:12.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.544 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:12.544 07:36:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:12.544 00:09:12.544 --- 10.0.0.1 ping statistics --- 00:09:12.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.544 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:12.544 07:36:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.544 07:36:38 -- nvmf/common.sh@421 -- # return 0 00:09:12.544 07:36:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:12.544 07:36:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.544 07:36:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:12.544 07:36:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:12.544 07:36:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.544 07:36:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:12.545 07:36:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:12.804 07:36:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:12.804 07:36:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:12.804 07:36:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.804 07:36:38 -- common/autotest_common.sh@10 -- # set +x 00:09:12.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
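This second bdevio pass drives the same target without hugepages: nvmfappstart picks the extra arguments up from NO_HUGE, so the launch that follows is equivalent to adding them by hand. A minimal sketch, assuming the same namespace and binary path as before:
    # --no-huge disables hugepages; -s 1024 caps DPDK memory at 1024 MB in their place
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &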
00:09:12.804 07:36:38 -- nvmf/common.sh@469 -- # nvmfpid=64255 00:09:12.804 07:36:38 -- nvmf/common.sh@470 -- # waitforlisten 64255 00:09:12.804 07:36:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:09:12.804 07:36:38 -- common/autotest_common.sh@829 -- # '[' -z 64255 ']' 00:09:12.804 07:36:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.804 07:36:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.804 07:36:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.804 07:36:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.804 07:36:38 -- common/autotest_common.sh@10 -- # set +x 00:09:12.804 [2024-12-02 07:36:38.225910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:12.804 [2024-12-02 07:36:38.226523] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:09:12.804 [2024-12-02 07:36:38.359795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.064 [2024-12-02 07:36:38.448292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.064 [2024-12-02 07:36:38.448664] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.064 [2024-12-02 07:36:38.448714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.064 [2024-12-02 07:36:38.448992] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
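nvmfappstart then launches the target binary inside that namespace, here with --no-huge -s 1024 because this is the no-huge variant of the bdevio test, and blocks in waitforlisten until the RPC socket answers. A rough stand-in for that helper, using a simple poll against rpc_get_methods instead of the suite's pid-aware loop (paths as in the trace, assuming /var/tmp/spdk.sock is free):

# Sketch of nvmfappstart: run nvmf_tgt in the namespace, wait for its RPC socket.
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk \
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# poll the default RPC socket (/var/tmp/spdk.sock) until the app is listening
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done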
00:09:13.064 [2024-12-02 07:36:38.449179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:13.064 [2024-12-02 07:36:38.449459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:13.064 [2024-12-02 07:36:38.449461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.065 [2024-12-02 07:36:38.449347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:14.003 07:36:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.003 07:36:39 -- common/autotest_common.sh@862 -- # return 0 00:09:14.003 07:36:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:14.003 07:36:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.003 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 07:36:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.003 07:36:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.003 07:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 [2024-12-02 07:36:39.302584] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.003 07:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 07:36:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.003 07:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 Malloc0 00:09:14.003 07:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 07:36:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.003 07:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 07:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 07:36:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.003 07:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 07:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 07:36:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.003 07:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 07:36:39 -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 [2024-12-02 07:36:39.340993] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.003 07:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 07:36:39 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:09:14.003 07:36:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:14.003 07:36:39 -- nvmf/common.sh@520 -- # config=() 00:09:14.003 07:36:39 -- nvmf/common.sh@520 -- # local subsystem config 00:09:14.003 07:36:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:14.003 07:36:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:14.003 { 00:09:14.003 "params": { 00:09:14.003 "name": "Nvme$subsystem", 00:09:14.003 "trtype": "$TEST_TRANSPORT", 00:09:14.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.003 "adrfam": "ipv4", 00:09:14.003 "trsvcid": "$NVMF_PORT", 
00:09:14.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.003 "hdgst": ${hdgst:-false}, 00:09:14.003 "ddgst": ${ddgst:-false} 00:09:14.003 }, 00:09:14.003 "method": "bdev_nvme_attach_controller" 00:09:14.003 } 00:09:14.003 EOF 00:09:14.003 )") 00:09:14.003 07:36:39 -- nvmf/common.sh@542 -- # cat 00:09:14.003 07:36:39 -- nvmf/common.sh@544 -- # jq . 00:09:14.003 07:36:39 -- nvmf/common.sh@545 -- # IFS=, 00:09:14.003 07:36:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:14.003 "params": { 00:09:14.003 "name": "Nvme1", 00:09:14.003 "trtype": "tcp", 00:09:14.003 "traddr": "10.0.0.2", 00:09:14.003 "adrfam": "ipv4", 00:09:14.003 "trsvcid": "4420", 00:09:14.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.003 "hdgst": false, 00:09:14.003 "ddgst": false 00:09:14.003 }, 00:09:14.003 "method": "bdev_nvme_attach_controller" 00:09:14.003 }' 00:09:14.003 [2024-12-02 07:36:39.391644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.003 [2024-12-02 07:36:39.391712] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64291 ] 00:09:14.003 [2024-12-02 07:36:39.525215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.262 [2024-12-02 07:36:39.658003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.262 [2024-12-02 07:36:39.658172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.262 [2024-12-02 07:36:39.658181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.262 [2024-12-02 07:36:39.824445] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:14.262 [2024-12-02 07:36:39.824737] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:14.262 I/O targets: 00:09:14.262 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:14.262 00:09:14.262 00:09:14.262 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.262 http://cunit.sourceforge.net/ 00:09:14.262 00:09:14.262 00:09:14.262 Suite: bdevio tests on: Nvme1n1 00:09:14.262 Test: blockdev write read block ...passed 00:09:14.262 Test: blockdev write zeroes read block ...passed 00:09:14.262 Test: blockdev write zeroes read no split ...passed 00:09:14.262 Test: blockdev write zeroes read split ...passed 00:09:14.262 Test: blockdev write zeroes read split partial ...passed 00:09:14.262 Test: blockdev reset ...[2024-12-02 07:36:39.866875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:14.262 [2024-12-02 07:36:39.867154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6680 (9): Bad file descriptor 00:09:14.262 [2024-12-02 07:36:39.883516] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:14.262 passed 00:09:14.521 Test: blockdev write read 8 blocks ...passed 00:09:14.521 Test: blockdev write read size > 128k ...passed 00:09:14.521 Test: blockdev write read invalid size ...passed 00:09:14.521 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.521 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.521 Test: blockdev write read max offset ...passed 00:09:14.521 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.521 Test: blockdev writev readv 8 blocks ...passed 00:09:14.521 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.521 Test: blockdev writev readv block ...passed 00:09:14.521 Test: blockdev writev readv size > 128k ...passed 00:09:14.521 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.521 Test: blockdev comparev and writev ...[2024-12-02 07:36:39.894959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.895363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.895402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.895419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.895750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.895770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.895791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.895803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.896085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.896104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.896124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.896135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.896445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.896465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.896485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:14.521 [2024-12-02 07:36:39.896497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:14.521 passed 00:09:14.521 Test: blockdev nvme passthru rw ...passed 00:09:14.521 Test: blockdev nvme passthru vendor specific ...[2024-12-02 07:36:39.898008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:14.521 [2024-12-02 07:36:39.898172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.898330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:14.521 [2024-12-02 07:36:39.898352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:14.521 [2024-12-02 07:36:39.898554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:14.522 [2024-12-02 07:36:39.898702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:14.522 passed 00:09:14.522 Test: blockdev nvme admin passthru ...[2024-12-02 07:36:39.899258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:14.522 [2024-12-02 07:36:39.899313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:14.522 passed 00:09:14.522 Test: blockdev copy ...passed 00:09:14.522 00:09:14.522 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.522 suites 1 1 n/a 0 0 00:09:14.522 tests 23 23 23 0 0 00:09:14.522 asserts 152 152 152 0 n/a 00:09:14.522 00:09:14.522 Elapsed time = 0.176 seconds 00:09:14.781 07:36:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.781 07:36:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.781 07:36:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.781 07:36:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.781 07:36:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:14.781 07:36:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:09:14.781 07:36:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:14.781 07:36:40 -- nvmf/common.sh@116 -- # sync 00:09:14.781 07:36:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:14.781 07:36:40 -- nvmf/common.sh@119 -- # set +e 00:09:14.781 07:36:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:14.781 07:36:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:14.781 rmmod nvme_tcp 00:09:14.781 rmmod nvme_fabrics 00:09:14.781 rmmod nvme_keyring 00:09:14.781 07:36:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:14.781 07:36:40 -- nvmf/common.sh@123 -- # set -e 00:09:14.781 07:36:40 -- nvmf/common.sh@124 -- # return 0 00:09:14.781 07:36:40 -- nvmf/common.sh@477 -- # '[' -n 64255 ']' 00:09:14.781 07:36:40 -- nvmf/common.sh@478 -- # killprocess 64255 00:09:14.781 07:36:40 -- common/autotest_common.sh@936 -- # '[' -z 64255 ']' 00:09:14.781 07:36:40 -- common/autotest_common.sh@940 -- # kill -0 64255 00:09:14.781 07:36:40 -- common/autotest_common.sh@941 -- # uname 00:09:14.781 07:36:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:14.781 07:36:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64255 00:09:14.781 killing process with pid 64255 00:09:14.781 
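The bdevio run summarized above was driven by a configuration that gen_nvmf_target_json emitted on the fly and handed over as --json /dev/fd/62; the bdev_nvme_attach_controller fragment it contains is the block printed just before the DPDK startup messages. Below is a standalone sketch of the same invocation with the config written to a regular file instead; the outer "subsystems"/"bdev" wrapper is the generic SPDK JSON-config layout and is an assumption here, since the trace only shows the attach_controller fragment itself.

# Sketch: run bdevio against the TCP subsystem using a file-based JSON config.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio_nvme.json --no-huge -s 1024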
07:36:40 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:09:14.781 07:36:40 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:09:14.781 07:36:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64255' 00:09:14.781 07:36:40 -- common/autotest_common.sh@955 -- # kill 64255 00:09:14.781 07:36:40 -- common/autotest_common.sh@960 -- # wait 64255 00:09:15.350 07:36:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:15.350 07:36:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:15.350 07:36:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:15.350 07:36:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.350 07:36:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:15.350 07:36:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.350 07:36:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.350 07:36:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.350 07:36:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:15.350 00:09:15.350 real 0m3.099s 00:09:15.350 user 0m10.197s 00:09:15.350 sys 0m1.116s 00:09:15.350 07:36:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:15.350 ************************************ 00:09:15.350 END TEST nvmf_bdevio_no_huge 00:09:15.350 ************************************ 00:09:15.350 07:36:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 07:36:40 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:09:15.350 07:36:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:15.350 07:36:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.350 07:36:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.350 ************************************ 00:09:15.350 START TEST nvmf_tls 00:09:15.350 ************************************ 00:09:15.350 07:36:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:09:15.350 * Looking for test storage... 00:09:15.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:15.350 07:36:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:15.350 07:36:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:15.350 07:36:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:15.350 07:36:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:15.350 07:36:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:15.350 07:36:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:15.350 07:36:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:15.350 07:36:40 -- scripts/common.sh@335 -- # IFS=.-: 00:09:15.350 07:36:40 -- scripts/common.sh@335 -- # read -ra ver1 00:09:15.350 07:36:40 -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.350 07:36:40 -- scripts/common.sh@336 -- # read -ra ver2 00:09:15.350 07:36:40 -- scripts/common.sh@337 -- # local 'op=<' 00:09:15.350 07:36:40 -- scripts/common.sh@339 -- # ver1_l=2 00:09:15.350 07:36:40 -- scripts/common.sh@340 -- # ver2_l=1 00:09:15.350 07:36:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:15.350 07:36:40 -- scripts/common.sh@343 -- # case "$op" in 00:09:15.350 07:36:40 -- scripts/common.sh@344 -- # : 1 00:09:15.350 07:36:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:15.350 07:36:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.350 07:36:40 -- scripts/common.sh@364 -- # decimal 1 00:09:15.350 07:36:40 -- scripts/common.sh@352 -- # local d=1 00:09:15.350 07:36:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.350 07:36:40 -- scripts/common.sh@354 -- # echo 1 00:09:15.350 07:36:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:15.350 07:36:40 -- scripts/common.sh@365 -- # decimal 2 00:09:15.350 07:36:40 -- scripts/common.sh@352 -- # local d=2 00:09:15.350 07:36:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.350 07:36:40 -- scripts/common.sh@354 -- # echo 2 00:09:15.350 07:36:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:15.350 07:36:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:15.350 07:36:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:15.350 07:36:40 -- scripts/common.sh@367 -- # return 0 00:09:15.350 07:36:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.350 07:36:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.350 --rc genhtml_branch_coverage=1 00:09:15.350 --rc genhtml_function_coverage=1 00:09:15.350 --rc genhtml_legend=1 00:09:15.350 --rc geninfo_all_blocks=1 00:09:15.350 --rc geninfo_unexecuted_blocks=1 00:09:15.350 00:09:15.350 ' 00:09:15.350 07:36:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.350 --rc genhtml_branch_coverage=1 00:09:15.350 --rc genhtml_function_coverage=1 00:09:15.350 --rc genhtml_legend=1 00:09:15.350 --rc geninfo_all_blocks=1 00:09:15.350 --rc geninfo_unexecuted_blocks=1 00:09:15.350 00:09:15.350 ' 00:09:15.350 07:36:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.350 --rc genhtml_branch_coverage=1 00:09:15.350 --rc genhtml_function_coverage=1 00:09:15.350 --rc genhtml_legend=1 00:09:15.350 --rc geninfo_all_blocks=1 00:09:15.350 --rc geninfo_unexecuted_blocks=1 00:09:15.350 00:09:15.350 ' 00:09:15.350 07:36:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.350 --rc genhtml_branch_coverage=1 00:09:15.350 --rc genhtml_function_coverage=1 00:09:15.350 --rc genhtml_legend=1 00:09:15.350 --rc geninfo_all_blocks=1 00:09:15.350 --rc geninfo_unexecuted_blocks=1 00:09:15.350 00:09:15.350 ' 00:09:15.350 07:36:40 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.350 07:36:40 -- nvmf/common.sh@7 -- # uname -s 00:09:15.350 07:36:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.350 07:36:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.350 07:36:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.350 07:36:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.350 07:36:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.350 07:36:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.350 07:36:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.350 07:36:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.350 07:36:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.350 07:36:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.350 07:36:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:09:15.350 
07:36:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:09:15.350 07:36:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.350 07:36:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.350 07:36:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:15.350 07:36:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.350 07:36:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.350 07:36:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.350 07:36:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.350 07:36:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.351 07:36:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.351 07:36:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.351 07:36:40 -- paths/export.sh@5 -- # export PATH 00:09:15.351 07:36:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.351 07:36:40 -- nvmf/common.sh@46 -- # : 0 00:09:15.351 07:36:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:15.351 07:36:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:15.351 07:36:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:15.351 07:36:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.351 07:36:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.351 07:36:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:15.351 07:36:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:15.351 07:36:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:15.351 07:36:40 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.351 07:36:40 -- target/tls.sh@71 -- # nvmftestinit 00:09:15.351 07:36:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:15.351 07:36:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.351 07:36:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:15.351 07:36:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:15.351 07:36:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:15.351 07:36:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.351 07:36:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.351 07:36:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.609 07:36:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:15.609 07:36:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:15.609 07:36:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:15.609 07:36:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:15.609 07:36:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:15.609 07:36:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:15.609 07:36:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.609 07:36:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.609 07:36:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:15.609 07:36:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:15.609 07:36:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:15.609 07:36:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:15.609 07:36:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:15.609 07:36:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.609 07:36:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:15.609 07:36:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:15.609 07:36:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:15.609 07:36:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:15.609 07:36:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:15.609 07:36:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:15.609 Cannot find device "nvmf_tgt_br" 00:09:15.609 07:36:41 -- nvmf/common.sh@154 -- # true 00:09:15.609 07:36:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.609 Cannot find device "nvmf_tgt_br2" 00:09:15.609 07:36:41 -- nvmf/common.sh@155 -- # true 00:09:15.609 07:36:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:15.609 07:36:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:15.609 Cannot find device "nvmf_tgt_br" 00:09:15.609 07:36:41 -- nvmf/common.sh@157 -- # true 00:09:15.609 07:36:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:15.609 Cannot find device "nvmf_tgt_br2" 00:09:15.609 07:36:41 -- nvmf/common.sh@158 -- # true 00:09:15.609 07:36:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:15.609 07:36:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:15.609 07:36:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:09:15.609 07:36:41 -- nvmf/common.sh@161 -- # true 00:09:15.609 07:36:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.609 07:36:41 -- nvmf/common.sh@162 -- # true 00:09:15.609 07:36:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:15.609 07:36:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:15.609 07:36:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:15.609 07:36:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:15.609 07:36:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.609 07:36:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.609 07:36:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.609 07:36:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:15.609 07:36:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:15.609 07:36:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:15.609 07:36:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:15.610 07:36:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:15.610 07:36:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:15.610 07:36:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.610 07:36:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.610 07:36:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.610 07:36:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:15.610 07:36:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:15.610 07:36:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.869 07:36:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.869 07:36:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.869 07:36:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.869 07:36:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.869 07:36:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:15.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:15.869 00:09:15.869 --- 10.0.0.2 ping statistics --- 00:09:15.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.869 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:15.869 07:36:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:15.869 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.869 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:09:15.869 00:09:15.869 --- 10.0.0.3 ping statistics --- 00:09:15.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.869 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:15.869 07:36:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:09:15.869 00:09:15.869 --- 10.0.0.1 ping statistics --- 00:09:15.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.869 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:15.869 07:36:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.869 07:36:41 -- nvmf/common.sh@421 -- # return 0 00:09:15.869 07:36:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:15.869 07:36:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.869 07:36:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:15.869 07:36:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:15.869 07:36:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.869 07:36:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:15.869 07:36:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:15.869 07:36:41 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:09:15.869 07:36:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:15.869 07:36:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.869 07:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.869 07:36:41 -- nvmf/common.sh@469 -- # nvmfpid=64477 00:09:15.869 07:36:41 -- nvmf/common.sh@470 -- # waitforlisten 64477 00:09:15.869 07:36:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:09:15.869 07:36:41 -- common/autotest_common.sh@829 -- # '[' -z 64477 ']' 00:09:15.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.869 07:36:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.869 07:36:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.869 07:36:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.869 07:36:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.869 07:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.869 [2024-12-02 07:36:41.349997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.869 [2024-12-02 07:36:41.350461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.869 [2024-12-02 07:36:41.475890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.127 [2024-12-02 07:36:41.524596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.127 [2024-12-02 07:36:41.524779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.127 [2024-12-02 07:36:41.524792] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.127 [2024-12-02 07:36:41.524799] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
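For the TLS suite the target above is started with -m 0x2 --wait-for-rpc, so the ssl socket implementation can be configured before the framework initializes; the RPC probes that follow in the trace set --tls-version (13, then 7) and toggle ktls, reading each value back with sock_impl_get_options. A condensed sketch of that pre-init configuration, assuming the default RPC socket and the rpc.py path from the trace:

# Sketch: configure the ssl socket implementation before framework init
# (only valid while the target is paused by --wait-for-rpc).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
$rpc framework_start_init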
00:09:16.127 [2024-12-02 07:36:41.524825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.694 07:36:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.694 07:36:42 -- common/autotest_common.sh@862 -- # return 0 00:09:16.694 07:36:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:16.694 07:36:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.694 07:36:42 -- common/autotest_common.sh@10 -- # set +x 00:09:16.953 07:36:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.953 07:36:42 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:09:16.953 07:36:42 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:09:17.212 true 00:09:17.212 07:36:42 -- target/tls.sh@82 -- # jq -r .tls_version 00:09:17.212 07:36:42 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:17.212 07:36:42 -- target/tls.sh@82 -- # version=0 00:09:17.212 07:36:42 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:09:17.212 07:36:42 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:09:17.470 07:36:42 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:17.470 07:36:42 -- target/tls.sh@90 -- # jq -r .tls_version 00:09:17.728 07:36:43 -- target/tls.sh@90 -- # version=13 00:09:17.728 07:36:43 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:09:17.728 07:36:43 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:09:17.987 07:36:43 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:17.987 07:36:43 -- target/tls.sh@98 -- # jq -r .tls_version 00:09:18.245 07:36:43 -- target/tls.sh@98 -- # version=7 00:09:18.245 07:36:43 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:09:18.245 07:36:43 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:18.245 07:36:43 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:09:18.245 07:36:43 -- target/tls.sh@105 -- # ktls=false 00:09:18.245 07:36:43 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:09:18.245 07:36:43 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:09:18.504 07:36:44 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:18.504 07:36:44 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:09:18.763 07:36:44 -- target/tls.sh@113 -- # ktls=true 00:09:18.763 07:36:44 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:09:18.763 07:36:44 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:09:19.021 07:36:44 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:19.021 07:36:44 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:09:19.280 07:36:44 -- target/tls.sh@121 -- # ktls=false 00:09:19.280 07:36:44 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:09:19.280 07:36:44 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:09:19.280 07:36:44 -- target/tls.sh@49 -- # local key hash crc 00:09:19.280 07:36:44 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:09:19.280 07:36:44 -- target/tls.sh@51 -- # hash=01 00:09:19.280 07:36:44 -- 
target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # gzip -1 -c 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # tail -c8 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # head -c 4 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # crc='p$H�' 00:09:19.280 07:36:44 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:09:19.280 07:36:44 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:19.280 07:36:44 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:19.280 07:36:44 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:19.280 07:36:44 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:09:19.280 07:36:44 -- target/tls.sh@49 -- # local key hash crc 00:09:19.280 07:36:44 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:09:19.280 07:36:44 -- target/tls.sh@51 -- # hash=01 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # gzip -1 -c 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # tail -c8 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # head -c 4 00:09:19.280 07:36:44 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:09:19.280 07:36:44 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:19.280 07:36:44 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:09:19.280 07:36:44 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:19.280 07:36:44 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:19.280 07:36:44 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:19.280 07:36:44 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:19.280 07:36:44 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:19.280 07:36:44 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:19.280 07:36:44 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:19.280 07:36:44 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:19.280 07:36:44 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:09:19.540 07:36:45 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:19.798 07:36:45 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:19.798 07:36:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:19.798 07:36:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:09:20.056 [2024-12-02 07:36:45.636471] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.056 07:36:45 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:09:20.315 07:36:45 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:09:20.574 [2024-12-02 07:36:46.084569] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:09:20.574 [2024-12-02 07:36:46.085025] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.574 07:36:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:09:20.833 malloc0 00:09:20.833 07:36:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:21.092 07:36:46 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:21.351 07:36:46 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:31.327 Initializing NVMe Controllers 00:09:31.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:31.327 Initialization complete. Launching workers. 00:09:31.327 ======================================================== 00:09:31.327 Latency(us) 00:09:31.327 Device Information : IOPS MiB/s Average min max 00:09:31.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11866.51 46.35 5394.57 1460.53 7713.71 00:09:31.327 ======================================================== 00:09:31.327 Total : 11866.51 46.35 5394.57 1460.53 7713.71 00:09:31.327 00:09:31.327 07:36:56 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:31.327 07:36:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:31.327 07:36:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:31.327 07:36:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:31.327 07:36:56 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:09:31.327 07:36:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:31.327 07:36:56 -- target/tls.sh@28 -- # bdevperf_pid=64720 00:09:31.327 07:36:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:31.327 07:36:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:31.327 07:36:56 -- target/tls.sh@31 -- # waitforlisten 64720 /var/tmp/bdevperf.sock 00:09:31.327 07:36:56 -- common/autotest_common.sh@829 -- # '[' -z 64720 ']' 00:09:31.327 07:36:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.327 07:36:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.327 07:36:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
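format_interchange_psk, exercised just above, turns a raw hex key into the NVMe TLS PSK interchange form: the CRC32 of the key is pulled out of the gzip -1 trailer (whose last 8 bytes are CRC32 plus size, hence tail -c8 | head -c4), appended to the key, base64-encoded, and framed as NVMeTLSkey-1:<hash>:<base64>:. The resulting key file is then registered on the target with a TLS-enabled listener (-k) and pinned to the host NQN with --psk. A sketch of both steps under the trace's assumptions (hash 01, key 00112233445566778899aabbccddeeff, target already running with the ssl socket implementation); like the suite's helper, holding the raw CRC in a shell variable only works when its bytes contain no NUL.

# Sketch of format_interchange_psk plus key registration on the target.
key=00112233445566778899aabbccddeeff
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # CRC32 from the gzip trailer
psk="NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
echo -n "$psk" > "$key_path"
chmod 0600 "$key_path"

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

With the key in place, the throughput figures above come from spdk_nvme_perf run with -S ssl and --psk-path pointing at the same file, exactly as shown in the trace.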
00:09:31.327 07:36:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.327 07:36:56 -- common/autotest_common.sh@10 -- # set +x 00:09:31.586 [2024-12-02 07:36:56.961204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:31.586 [2024-12-02 07:36:56.961518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64720 ] 00:09:31.586 [2024-12-02 07:36:57.101952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.586 [2024-12-02 07:36:57.169419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.522 07:36:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.522 07:36:57 -- common/autotest_common.sh@862 -- # return 0 00:09:32.522 07:36:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:32.522 [2024-12-02 07:36:58.096332] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:32.780 TLSTESTn1 00:09:32.780 07:36:58 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:09:32.780 Running I/O for 10 seconds... 00:09:42.780 00:09:42.780 Latency(us) 00:09:42.780 [2024-12-02T07:37:08.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.780 [2024-12-02T07:37:08.404Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:09:42.780 Verification LBA range: start 0x0 length 0x2000 00:09:42.780 TLSTESTn1 : 10.01 7377.31 28.82 0.00 0.00 17325.88 3232.12 18826.71 00:09:42.780 [2024-12-02T07:37:08.404Z] =================================================================================================================== 00:09:42.780 [2024-12-02T07:37:08.404Z] Total : 7377.31 28.82 0.00 0.00 17325.88 3232.12 18826.71 00:09:42.780 0 00:09:42.780 07:37:08 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.780 07:37:08 -- target/tls.sh@45 -- # killprocess 64720 00:09:42.780 07:37:08 -- common/autotest_common.sh@936 -- # '[' -z 64720 ']' 00:09:42.780 07:37:08 -- common/autotest_common.sh@940 -- # kill -0 64720 00:09:42.780 07:37:08 -- common/autotest_common.sh@941 -- # uname 00:09:42.780 07:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:42.780 07:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64720 00:09:42.780 killing process with pid 64720 00:09:42.780 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.780 00:09:42.780 Latency(us) 00:09:42.780 [2024-12-02T07:37:08.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.780 [2024-12-02T07:37:08.404Z] =================================================================================================================== 00:09:42.780 [2024-12-02T07:37:08.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.780 07:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:42.780 07:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:42.780 07:37:08 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 64720' 00:09:42.780 07:37:08 -- common/autotest_common.sh@955 -- # kill 64720 00:09:42.780 07:37:08 -- common/autotest_common.sh@960 -- # wait 64720 00:09:43.039 07:37:08 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:43.039 07:37:08 -- common/autotest_common.sh@650 -- # local es=0 00:09:43.039 07:37:08 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:43.039 07:37:08 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:09:43.039 07:37:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.039 07:37:08 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:09:43.039 07:37:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.039 07:37:08 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:43.039 07:37:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:43.039 07:37:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:43.039 07:37:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:43.039 07:37:08 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:09:43.039 07:37:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:43.039 07:37:08 -- target/tls.sh@28 -- # bdevperf_pid=64853 00:09:43.039 07:37:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:43.039 07:37:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:43.039 07:37:08 -- target/tls.sh@31 -- # waitforlisten 64853 /var/tmp/bdevperf.sock 00:09:43.039 07:37:08 -- common/autotest_common.sh@829 -- # '[' -z 64853 ']' 00:09:43.039 07:37:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.039 07:37:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.039 07:37:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:43.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.039 07:37:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.039 07:37:08 -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 [2024-12-02 07:37:08.554955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
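run_bdevperf, whose first (successful) invocation produced the TLSTESTn1 results above, drives the initiator side entirely over bdevperf's own RPC socket: the app is started idle with -z, a TLS controller is attached with the host's PSK file, and perform_tests is kicked off through bdevperf.py. A sketch of that happy path, reusing the socket path, NQNs and key file from the trace (the suite additionally waits for /var/tmp/bdevperf.sock with waitforlisten before issuing RPCs):

# Sketch of run_bdevperf's successful TLS case.
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$spdk/test/nvmf/target/key1.txt"

"$spdk/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests

The second invocation, starting below, repeats this with key2.txt, which does not match the PSK registered for host1, so the attach is expected to fail; the NOT wrapper in the trace asserts exactly that.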
00:09:43.039 [2024-12-02 07:37:08.555235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64853 ] 00:09:43.298 [2024-12-02 07:37:08.694330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.298 [2024-12-02 07:37:08.744343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.234 07:37:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.234 07:37:09 -- common/autotest_common.sh@862 -- # return 0 00:09:44.235 07:37:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:44.235 [2024-12-02 07:37:09.756986] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:44.235 [2024-12-02 07:37:09.766001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:09:44.235 [2024-12-02 07:37:09.766382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e650 (107): Transport endpoint is not connected 00:09:44.235 [2024-12-02 07:37:09.767376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e650 (9): Bad file descriptor 00:09:44.235 [2024-12-02 07:37:09.768371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:09:44.235 [2024-12-02 07:37:09.768414] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:09:44.235 [2024-12-02 07:37:09.768425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:09:44.235 request: 00:09:44.235 { 00:09:44.235 "name": "TLSTEST", 00:09:44.235 "trtype": "tcp", 00:09:44.235 "traddr": "10.0.0.2", 00:09:44.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.235 "adrfam": "ipv4", 00:09:44.235 "trsvcid": "4420", 00:09:44.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.235 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:09:44.235 "method": "bdev_nvme_attach_controller", 00:09:44.235 "req_id": 1 00:09:44.235 } 00:09:44.235 Got JSON-RPC error response 00:09:44.235 response: 00:09:44.235 { 00:09:44.235 "code": -32602, 00:09:44.235 "message": "Invalid parameters" 00:09:44.235 } 00:09:44.235 07:37:09 -- target/tls.sh@36 -- # killprocess 64853 00:09:44.235 07:37:09 -- common/autotest_common.sh@936 -- # '[' -z 64853 ']' 00:09:44.235 07:37:09 -- common/autotest_common.sh@940 -- # kill -0 64853 00:09:44.235 07:37:09 -- common/autotest_common.sh@941 -- # uname 00:09:44.235 07:37:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.235 07:37:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64853 00:09:44.235 killing process with pid 64853 00:09:44.235 Received shutdown signal, test time was about 10.000000 seconds 00:09:44.235 00:09:44.235 Latency(us) 00:09:44.235 [2024-12-02T07:37:09.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.235 [2024-12-02T07:37:09.859Z] =================================================================================================================== 00:09:44.235 [2024-12-02T07:37:09.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:44.235 07:37:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:44.235 07:37:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:44.235 07:37:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64853' 00:09:44.235 07:37:09 -- common/autotest_common.sh@955 -- # kill 64853 00:09:44.235 07:37:09 -- common/autotest_common.sh@960 -- # wait 64853 00:09:44.494 07:37:09 -- target/tls.sh@37 -- # return 1 00:09:44.494 07:37:09 -- common/autotest_common.sh@653 -- # es=1 00:09:44.494 07:37:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.494 07:37:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:44.494 07:37:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.494 07:37:09 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:44.494 07:37:09 -- common/autotest_common.sh@650 -- # local es=0 00:09:44.494 07:37:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:44.494 07:37:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:09:44.494 07:37:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.494 07:37:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:09:44.494 07:37:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.494 07:37:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:44.494 07:37:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:44.494 07:37:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:44.494 07:37:09 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:09:44.494 07:37:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:09:44.494 07:37:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:44.494 07:37:09 -- target/tls.sh@28 -- # bdevperf_pid=64875 00:09:44.494 07:37:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:44.494 07:37:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:44.494 07:37:09 -- target/tls.sh@31 -- # waitforlisten 64875 /var/tmp/bdevperf.sock 00:09:44.494 07:37:09 -- common/autotest_common.sh@829 -- # '[' -z 64875 ']' 00:09:44.494 07:37:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:44.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:44.494 07:37:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.494 07:37:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:44.494 07:37:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.494 07:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:44.494 [2024-12-02 07:37:10.032400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:44.494 [2024-12-02 07:37:10.032501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64875 ] 00:09:44.753 [2024-12-02 07:37:10.169557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.753 [2024-12-02 07:37:10.221183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.688 07:37:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.688 07:37:10 -- common/autotest_common.sh@862 -- # return 0 00:09:45.688 07:37:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:45.688 [2024-12-02 07:37:11.213261] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:45.688 [2024-12-02 07:37:11.218022] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:09:45.688 [2024-12-02 07:37:11.218265] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:09:45.688 [2024-12-02 07:37:11.218513] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:09:45.688 [2024-12-02 07:37:11.218860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80c650 (107): Transport endpoint is not connected 00:09:45.688 [2024-12-02 07:37:11.219842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80c650 (9): Bad file descriptor 00:09:45.688 [2024-12-02 07:37:11.220837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:09:45.688 [2024-12-02 07:37:11.221205] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:09:45.688 [2024-12-02 07:37:11.221427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:09:45.688 request: 00:09:45.688 { 00:09:45.688 "name": "TLSTEST", 00:09:45.688 "trtype": "tcp", 00:09:45.688 "traddr": "10.0.0.2", 00:09:45.688 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:09:45.688 "adrfam": "ipv4", 00:09:45.688 "trsvcid": "4420", 00:09:45.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.688 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:09:45.688 "method": "bdev_nvme_attach_controller", 00:09:45.688 "req_id": 1 00:09:45.688 } 00:09:45.688 Got JSON-RPC error response 00:09:45.688 response: 00:09:45.688 { 00:09:45.688 "code": -32602, 00:09:45.688 "message": "Invalid parameters" 00:09:45.688 } 00:09:45.688 07:37:11 -- target/tls.sh@36 -- # killprocess 64875 00:09:45.688 07:37:11 -- common/autotest_common.sh@936 -- # '[' -z 64875 ']' 00:09:45.688 07:37:11 -- common/autotest_common.sh@940 -- # kill -0 64875 00:09:45.688 07:37:11 -- common/autotest_common.sh@941 -- # uname 00:09:45.688 07:37:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:45.688 07:37:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64875 00:09:45.688 killing process with pid 64875 00:09:45.688 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.688 00:09:45.688 Latency(us) 00:09:45.688 [2024-12-02T07:37:11.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.688 [2024-12-02T07:37:11.312Z] =================================================================================================================== 00:09:45.688 [2024-12-02T07:37:11.312Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:45.688 07:37:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:45.688 07:37:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:45.688 07:37:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64875' 00:09:45.688 07:37:11 -- common/autotest_common.sh@955 -- # kill 64875 00:09:45.689 07:37:11 -- common/autotest_common.sh@960 -- # wait 64875 00:09:45.948 07:37:11 -- target/tls.sh@37 -- # return 1 00:09:45.948 07:37:11 -- common/autotest_common.sh@653 -- # es=1 00:09:45.948 07:37:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.948 07:37:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:45.948 07:37:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.948 07:37:11 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:45.948 07:37:11 -- common/autotest_common.sh@650 -- # local es=0 00:09:45.948 07:37:11 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:45.948 07:37:11 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:09:45.948 07:37:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.948 07:37:11 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:09:45.948 07:37:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.948 07:37:11 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:45.948 07:37:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:45.948 07:37:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:09:45.948 07:37:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:45.948 07:37:11 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:09:45.948 07:37:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:45.948 07:37:11 -- target/tls.sh@28 -- # bdevperf_pid=64903 00:09:45.948 07:37:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:45.948 07:37:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:45.948 07:37:11 -- target/tls.sh@31 -- # waitforlisten 64903 /var/tmp/bdevperf.sock 00:09:45.948 07:37:11 -- common/autotest_common.sh@829 -- # '[' -z 64903 ']' 00:09:45.949 07:37:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.949 07:37:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.949 07:37:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:45.949 07:37:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.949 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:09:45.949 [2024-12-02 07:37:11.491367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:45.949 [2024-12-02 07:37:11.491649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64903 ] 00:09:46.208 [2024-12-02 07:37:11.625344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.208 [2024-12-02 07:37:11.675889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.144 07:37:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.144 07:37:12 -- common/autotest_common.sh@862 -- # return 0 00:09:47.144 07:37:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:47.144 [2024-12-02 07:37:12.599618] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:47.144 [2024-12-02 07:37:12.606294] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:09:47.144 [2024-12-02 07:37:12.606337] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:09:47.144 [2024-12-02 07:37:12.606385] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:09:47.144 [2024-12-02 07:37:12.606953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed5650 
(107): Transport endpoint is not connected 00:09:47.144 [2024-12-02 07:37:12.607960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed5650 (9): Bad file descriptor 00:09:47.144 [2024-12-02 07:37:12.608942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:09:47.144 [2024-12-02 07:37:12.608970] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:09:47.144 [2024-12-02 07:37:12.608997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:09:47.144 request: 00:09:47.144 { 00:09:47.144 "name": "TLSTEST", 00:09:47.144 "trtype": "tcp", 00:09:47.144 "traddr": "10.0.0.2", 00:09:47.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.144 "adrfam": "ipv4", 00:09:47.144 "trsvcid": "4420", 00:09:47.144 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:09:47.144 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:09:47.144 "method": "bdev_nvme_attach_controller", 00:09:47.144 "req_id": 1 00:09:47.144 } 00:09:47.144 Got JSON-RPC error response 00:09:47.144 response: 00:09:47.144 { 00:09:47.144 "code": -32602, 00:09:47.144 "message": "Invalid parameters" 00:09:47.144 } 00:09:47.144 07:37:12 -- target/tls.sh@36 -- # killprocess 64903 00:09:47.144 07:37:12 -- common/autotest_common.sh@936 -- # '[' -z 64903 ']' 00:09:47.144 07:37:12 -- common/autotest_common.sh@940 -- # kill -0 64903 00:09:47.144 07:37:12 -- common/autotest_common.sh@941 -- # uname 00:09:47.144 07:37:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.144 07:37:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64903 00:09:47.144 killing process with pid 64903 00:09:47.144 Received shutdown signal, test time was about 10.000000 seconds 00:09:47.144 00:09:47.144 Latency(us) 00:09:47.144 [2024-12-02T07:37:12.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.144 [2024-12-02T07:37:12.768Z] =================================================================================================================== 00:09:47.144 [2024-12-02T07:37:12.768Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:47.144 07:37:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:47.144 07:37:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:47.144 07:37:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64903' 00:09:47.144 07:37:12 -- common/autotest_common.sh@955 -- # kill 64903 00:09:47.144 07:37:12 -- common/autotest_common.sh@960 -- # wait 64903 00:09:47.403 07:37:12 -- target/tls.sh@37 -- # return 1 00:09:47.403 07:37:12 -- common/autotest_common.sh@653 -- # es=1 00:09:47.403 07:37:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.403 07:37:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.403 07:37:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.403 07:37:12 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:09:47.403 07:37:12 -- common/autotest_common.sh@650 -- # local es=0 00:09:47.403 07:37:12 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:09:47.403 07:37:12 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:09:47.403 07:37:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.403 07:37:12 -- common/autotest_common.sh@642 -- # type 
-t run_bdevperf 00:09:47.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:47.403 07:37:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.403 07:37:12 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:09:47.403 07:37:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:47.403 07:37:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:47.403 07:37:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:47.403 07:37:12 -- target/tls.sh@23 -- # psk= 00:09:47.403 07:37:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:47.403 07:37:12 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:47.403 07:37:12 -- target/tls.sh@28 -- # bdevperf_pid=64930 00:09:47.403 07:37:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:47.403 07:37:12 -- target/tls.sh@31 -- # waitforlisten 64930 /var/tmp/bdevperf.sock 00:09:47.403 07:37:12 -- common/autotest_common.sh@829 -- # '[' -z 64930 ']' 00:09:47.403 07:37:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:47.403 07:37:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.403 07:37:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:47.403 07:37:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.403 07:37:12 -- common/autotest_common.sh@10 -- # set +x 00:09:47.403 [2024-12-02 07:37:12.876037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:47.403 [2024-12-02 07:37:12.876920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64930 ] 00:09:47.403 [2024-12-02 07:37:13.006177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.677 [2024-12-02 07:37:13.057932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.677 07:37:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.677 07:37:13 -- common/autotest_common.sh@862 -- # return 0 00:09:47.677 07:37:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:09:47.970 [2024-12-02 07:37:13.337286] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:09:47.970 [2024-12-02 07:37:13.339219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe61010 (9): Bad file descriptor 00:09:47.970 [2024-12-02 07:37:13.340216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:09:47.970 [2024-12-02 07:37:13.340915] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:09:47.970 [2024-12-02 07:37:13.341350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:09:47.970 request: 00:09:47.970 { 00:09:47.970 "name": "TLSTEST", 00:09:47.970 "trtype": "tcp", 00:09:47.970 "traddr": "10.0.0.2", 00:09:47.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.970 "adrfam": "ipv4", 00:09:47.970 "trsvcid": "4420", 00:09:47.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.970 "method": "bdev_nvme_attach_controller", 00:09:47.970 "req_id": 1 00:09:47.970 } 00:09:47.970 Got JSON-RPC error response 00:09:47.970 response: 00:09:47.970 { 00:09:47.970 "code": -32602, 00:09:47.970 "message": "Invalid parameters" 00:09:47.970 } 00:09:47.970 07:37:13 -- target/tls.sh@36 -- # killprocess 64930 00:09:47.970 07:37:13 -- common/autotest_common.sh@936 -- # '[' -z 64930 ']' 00:09:47.970 07:37:13 -- common/autotest_common.sh@940 -- # kill -0 64930 00:09:47.970 07:37:13 -- common/autotest_common.sh@941 -- # uname 00:09:47.970 07:37:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.970 07:37:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64930 00:09:47.970 killing process with pid 64930 00:09:47.970 Received shutdown signal, test time was about 10.000000 seconds 00:09:47.970 00:09:47.970 Latency(us) 00:09:47.970 [2024-12-02T07:37:13.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.970 [2024-12-02T07:37:13.594Z] =================================================================================================================== 00:09:47.970 [2024-12-02T07:37:13.594Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:09:47.970 07:37:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:47.970 07:37:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:47.970 07:37:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64930' 00:09:47.970 07:37:13 -- common/autotest_common.sh@955 -- # kill 64930 00:09:47.970 07:37:13 -- common/autotest_common.sh@960 -- # wait 64930 00:09:47.970 07:37:13 -- target/tls.sh@37 -- # return 1 00:09:47.970 07:37:13 -- common/autotest_common.sh@653 -- # es=1 00:09:47.970 07:37:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.970 07:37:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.970 07:37:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.970 07:37:13 -- target/tls.sh@167 -- # killprocess 64477 00:09:47.970 07:37:13 -- common/autotest_common.sh@936 -- # '[' -z 64477 ']' 00:09:47.970 07:37:13 -- common/autotest_common.sh@940 -- # kill -0 64477 00:09:47.970 07:37:13 -- common/autotest_common.sh@941 -- # uname 00:09:47.970 07:37:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.970 07:37:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64477 00:09:47.970 killing process with pid 64477 00:09:47.970 07:37:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:47.970 07:37:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:47.970 07:37:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64477' 00:09:47.970 07:37:13 -- common/autotest_common.sh@955 -- # kill 64477 00:09:47.970 07:37:13 -- common/autotest_common.sh@960 -- # wait 64477 00:09:48.233 07:37:13 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:09:48.233 07:37:13 -- target/tls.sh@49 -- # local key hash crc 00:09:48.233 07:37:13 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:09:48.233 07:37:13 -- target/tls.sh@51 -- # hash=02 
00:09:48.233 07:37:13 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:09:48.233 07:37:13 -- target/tls.sh@52 -- # gzip -1 -c 00:09:48.233 07:37:13 -- target/tls.sh@52 -- # tail -c8 00:09:48.233 07:37:13 -- target/tls.sh@52 -- # head -c 4 00:09:48.233 07:37:13 -- target/tls.sh@52 -- # crc='�e�'\''' 00:09:48.233 07:37:13 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:48.233 07:37:13 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:09:48.233 07:37:13 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:09:48.233 07:37:13 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:09:48.233 07:37:13 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:48.233 07:37:13 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:09:48.233 07:37:13 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:48.233 07:37:13 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:09:48.233 07:37:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:48.233 07:37:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:48.233 07:37:13 -- common/autotest_common.sh@10 -- # set +x 00:09:48.233 07:37:13 -- nvmf/common.sh@469 -- # nvmfpid=64964 00:09:48.233 07:37:13 -- nvmf/common.sh@470 -- # waitforlisten 64964 00:09:48.233 07:37:13 -- common/autotest_common.sh@829 -- # '[' -z 64964 ']' 00:09:48.233 07:37:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.233 07:37:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.233 07:37:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:48.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.233 07:37:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.233 07:37:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.233 07:37:13 -- common/autotest_common.sh@10 -- # set +x 00:09:48.233 [2024-12-02 07:37:13.841944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:48.233 [2024-12-02 07:37:13.842037] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.492 [2024-12-02 07:37:13.979296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.492 [2024-12-02 07:37:14.027661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:48.492 [2024-12-02 07:37:14.027815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.492 [2024-12-02 07:37:14.027827] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.492 [2024-12-02 07:37:14.027834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
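The pipeline traced above is how tls.sh builds the TLS PSK interchange string: the configured key is run through gzip -1 purely to obtain its CRC32 (the last eight bytes of a gzip stream are the CRC32, little-endian, followed by the input length), and key plus CRC are then base64-encoded into the NVMeTLSkey-1:02:... form. A minimal standalone sketch of the same derivation, using the test key from this run:

#!/usr/bin/env bash
# Sketch mirroring the format_interchange_psk steps traced above.
key=00112233445566778899aabbccddeeff0011223344556677   # test key from this log
# gzip trailer trick: the last 8 bytes are CRC32 (little-endian) + input length,
# so the first 4 of those are the CRC32 of the key.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
# Interchange form: NVMeTLSkey-1:<hash id>:base64(key || CRC32):
echo "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
# Prints: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The resulting key_long.txt is chmod'd to 0600 as traced above; that permission is load-bearing, as the later negative tests in this log show.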
00:09:48.492 [2024-12-02 07:37:14.027861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.429 07:37:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.429 07:37:14 -- common/autotest_common.sh@862 -- # return 0 00:09:49.429 07:37:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:49.429 07:37:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.429 07:37:14 -- common/autotest_common.sh@10 -- # set +x 00:09:49.429 07:37:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.429 07:37:14 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:49.429 07:37:14 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:49.429 07:37:14 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:09:49.429 [2024-12-02 07:37:15.039849] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.687 07:37:15 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:09:49.945 07:37:15 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:09:49.945 [2024-12-02 07:37:15.563971] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:09:49.945 [2024-12-02 07:37:15.564181] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.204 07:37:15 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:09:50.462 malloc0 00:09:50.462 07:37:15 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:50.462 07:37:16 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:50.720 07:37:16 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:50.720 07:37:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:50.720 07:37:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:50.720 07:37:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:50.720 07:37:16 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:09:50.720 07:37:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:50.720 07:37:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:50.720 07:37:16 -- target/tls.sh@28 -- # bdevperf_pid=65014 00:09:50.720 07:37:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:50.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
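For reference, the target-side setup that setup_nvmf_tgt traced above (target pid 64964) reduces to the RPC sequence below. The paths are the ones this job uses and rpc.py talks to the target's default /var/tmp/spdk.sock; this is a condensed sketch of the traced calls, not a substitute for the test helper:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The initiator side (the bdevperf instance being started here) then attaches with the matching key via bdev_nvme_attach_controller ... --psk on its own /var/tmp/bdevperf.sock RPC socket, as traced next.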
00:09:50.720 07:37:16 -- target/tls.sh@31 -- # waitforlisten 65014 /var/tmp/bdevperf.sock 00:09:50.720 07:37:16 -- common/autotest_common.sh@829 -- # '[' -z 65014 ']' 00:09:50.720 07:37:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:50.720 07:37:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.720 07:37:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:50.720 07:37:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.720 07:37:16 -- common/autotest_common.sh@10 -- # set +x 00:09:50.721 [2024-12-02 07:37:16.283635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:50.721 [2024-12-02 07:37:16.283779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65014 ] 00:09:50.978 [2024-12-02 07:37:16.420607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.978 [2024-12-02 07:37:16.488766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.545 07:37:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.545 07:37:17 -- common/autotest_common.sh@862 -- # return 0 00:09:51.545 07:37:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:09:51.804 [2024-12-02 07:37:17.351140] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:51.804 TLSTESTn1 00:09:52.064 07:37:17 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:09:52.064 Running I/O for 10 seconds... 
00:10:02.033 00:10:02.033 Latency(us) 00:10:02.033 [2024-12-02T07:37:27.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.033 [2024-12-02T07:37:27.657Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:02.033 Verification LBA range: start 0x0 length 0x2000 00:10:02.033 TLSTESTn1 : 10.01 6676.41 26.08 0.00 0.00 19143.07 3798.11 19779.96 00:10:02.033 [2024-12-02T07:37:27.657Z] =================================================================================================================== 00:10:02.033 [2024-12-02T07:37:27.657Z] Total : 6676.41 26.08 0.00 0.00 19143.07 3798.11 19779.96 00:10:02.033 0 00:10:02.033 07:37:27 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:02.033 07:37:27 -- target/tls.sh@45 -- # killprocess 65014 00:10:02.033 07:37:27 -- common/autotest_common.sh@936 -- # '[' -z 65014 ']' 00:10:02.033 07:37:27 -- common/autotest_common.sh@940 -- # kill -0 65014 00:10:02.033 07:37:27 -- common/autotest_common.sh@941 -- # uname 00:10:02.033 07:37:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.033 07:37:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65014 00:10:02.033 killing process with pid 65014 00:10:02.033 Received shutdown signal, test time was about 10.000000 seconds 00:10:02.033 00:10:02.033 Latency(us) 00:10:02.033 [2024-12-02T07:37:27.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.033 [2024-12-02T07:37:27.657Z] =================================================================================================================== 00:10:02.033 [2024-12-02T07:37:27.657Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:02.033 07:37:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:02.033 07:37:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:02.033 07:37:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65014' 00:10:02.033 07:37:27 -- common/autotest_common.sh@955 -- # kill 65014 00:10:02.033 07:37:27 -- common/autotest_common.sh@960 -- # wait 65014 00:10:02.293 07:37:27 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:02.293 07:37:27 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:02.293 07:37:27 -- common/autotest_common.sh@650 -- # local es=0 00:10:02.293 07:37:27 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:02.293 07:37:27 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:02.293 07:37:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.293 07:37:27 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:02.293 07:37:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.293 07:37:27 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:02.293 07:37:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:02.293 07:37:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:02.293 07:37:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:02.293 07:37:27 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:02.293 07:37:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:02.293 07:37:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:02.293 07:37:27 -- target/tls.sh@28 -- # bdevperf_pid=65149 00:10:02.293 07:37:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:02.293 07:37:27 -- target/tls.sh@31 -- # waitforlisten 65149 /var/tmp/bdevperf.sock 00:10:02.293 07:37:27 -- common/autotest_common.sh@829 -- # '[' -z 65149 ']' 00:10:02.293 07:37:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:02.293 07:37:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.293 07:37:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:02.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:02.293 07:37:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.293 07:37:27 -- common/autotest_common.sh@10 -- # set +x 00:10:02.293 [2024-12-02 07:37:27.827312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:02.293 [2024-12-02 07:37:27.827598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65149 ] 00:10:02.552 [2024-12-02 07:37:27.960582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.552 [2024-12-02 07:37:28.015500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.489 07:37:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.489 07:37:28 -- common/autotest_common.sh@862 -- # return 0 00:10:03.489 07:37:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:03.489 [2024-12-02 07:37:29.045999] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:03.489 [2024-12-02 07:37:29.046610] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:10:03.489 request: 00:10:03.489 { 00:10:03.489 "name": "TLSTEST", 00:10:03.489 "trtype": "tcp", 00:10:03.489 "traddr": "10.0.0.2", 00:10:03.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.489 "adrfam": "ipv4", 00:10:03.489 "trsvcid": "4420", 00:10:03.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.489 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:03.489 "method": "bdev_nvme_attach_controller", 00:10:03.489 "req_id": 1 00:10:03.489 } 00:10:03.489 Got JSON-RPC error response 00:10:03.489 response: 00:10:03.489 { 00:10:03.489 "code": -22, 00:10:03.489 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:03.489 } 00:10:03.489 07:37:29 -- target/tls.sh@36 -- # killprocess 65149 00:10:03.489 07:37:29 -- common/autotest_common.sh@936 -- # '[' -z 65149 ']' 00:10:03.489 07:37:29 -- common/autotest_common.sh@940 -- # kill -0 65149 00:10:03.489 07:37:29 -- common/autotest_common.sh@941 -- 
# uname 00:10:03.489 07:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.489 07:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65149 00:10:03.489 killing process with pid 65149 00:10:03.489 Received shutdown signal, test time was about 10.000000 seconds 00:10:03.489 00:10:03.489 Latency(us) 00:10:03.489 [2024-12-02T07:37:29.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.489 [2024-12-02T07:37:29.113Z] =================================================================================================================== 00:10:03.489 [2024-12-02T07:37:29.114Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:03.490 07:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:03.490 07:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:03.490 07:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65149' 00:10:03.490 07:37:29 -- common/autotest_common.sh@955 -- # kill 65149 00:10:03.490 07:37:29 -- common/autotest_common.sh@960 -- # wait 65149 00:10:03.749 07:37:29 -- target/tls.sh@37 -- # return 1 00:10:03.749 07:37:29 -- common/autotest_common.sh@653 -- # es=1 00:10:03.749 07:37:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:03.749 07:37:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:03.749 07:37:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:03.749 07:37:29 -- target/tls.sh@183 -- # killprocess 64964 00:10:03.749 07:37:29 -- common/autotest_common.sh@936 -- # '[' -z 64964 ']' 00:10:03.749 07:37:29 -- common/autotest_common.sh@940 -- # kill -0 64964 00:10:03.749 07:37:29 -- common/autotest_common.sh@941 -- # uname 00:10:03.749 07:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.749 07:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64964 00:10:03.749 killing process with pid 64964 00:10:03.749 07:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:03.749 07:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:03.749 07:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64964' 00:10:03.749 07:37:29 -- common/autotest_common.sh@955 -- # kill 64964 00:10:03.749 07:37:29 -- common/autotest_common.sh@960 -- # wait 64964 00:10:04.009 07:37:29 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:10:04.009 07:37:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:04.009 07:37:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.009 07:37:29 -- common/autotest_common.sh@10 -- # set +x 00:10:04.009 07:37:29 -- nvmf/common.sh@469 -- # nvmfpid=65181 00:10:04.009 07:37:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:04.009 07:37:29 -- nvmf/common.sh@470 -- # waitforlisten 65181 00:10:04.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.009 07:37:29 -- common/autotest_common.sh@829 -- # '[' -z 65181 ']' 00:10:04.009 07:37:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.009 07:37:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.009 07:37:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:04.009 07:37:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.009 07:37:29 -- common/autotest_common.sh@10 -- # set +x 00:10:04.009 [2024-12-02 07:37:29.522623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.009 [2024-12-02 07:37:29.522909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.269 [2024-12-02 07:37:29.662379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.269 [2024-12-02 07:37:29.709644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:04.269 [2024-12-02 07:37:29.710020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.269 [2024-12-02 07:37:29.710093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.269 [2024-12-02 07:37:29.710226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.269 [2024-12-02 07:37:29.710284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.837 07:37:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.837 07:37:30 -- common/autotest_common.sh@862 -- # return 0 00:10:04.837 07:37:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:04.837 07:37:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:04.837 07:37:30 -- common/autotest_common.sh@10 -- # set +x 00:10:04.837 07:37:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.837 07:37:30 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:04.837 07:37:30 -- common/autotest_common.sh@650 -- # local es=0 00:10:04.837 07:37:30 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:04.837 07:37:30 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:10:04.837 07:37:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.837 07:37:30 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:10:04.837 07:37:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.837 07:37:30 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:04.837 07:37:30 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:04.837 07:37:30 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:05.096 [2024-12-02 07:37:30.656004] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.096 07:37:30 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:05.355 07:37:30 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:05.614 [2024-12-02 07:37:31.040041] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:05.614 [2024-12-02 07:37:31.040219] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
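With the second target instance (pid 65181) listening, one way to confirm from the shell that the cnode1 listener really came up with TLS is to dump the live configuration and look for the listener's secure_channel flag, the same information the save_config output later in this log contains. A quick sketch (default /var/tmp/spdk.sock assumed):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# A listener added with -k is persisted with "secure_channel": true.
$RPC save_config | grep -B 1 -A 1 '"secure_channel"'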
00:10:05.614 07:37:31 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:05.874 malloc0 00:10:05.874 07:37:31 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:06.134 07:37:31 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:06.134 [2024-12-02 07:37:31.689601] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:10:06.134 [2024-12-02 07:37:31.689635] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:10:06.134 [2024-12-02 07:37:31.689651] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:10:06.134 request: 00:10:06.134 { 00:10:06.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.134 "host": "nqn.2016-06.io.spdk:host1", 00:10:06.134 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:06.134 "method": "nvmf_subsystem_add_host", 00:10:06.134 "req_id": 1 00:10:06.134 } 00:10:06.134 Got JSON-RPC error response 00:10:06.134 response: 00:10:06.134 { 00:10:06.134 "code": -32603, 00:10:06.134 "message": "Internal error" 00:10:06.134 } 00:10:06.134 07:37:31 -- common/autotest_common.sh@653 -- # es=1 00:10:06.134 07:37:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:06.134 07:37:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:06.134 07:37:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:06.134 07:37:31 -- target/tls.sh@189 -- # killprocess 65181 00:10:06.134 07:37:31 -- common/autotest_common.sh@936 -- # '[' -z 65181 ']' 00:10:06.134 07:37:31 -- common/autotest_common.sh@940 -- # kill -0 65181 00:10:06.134 07:37:31 -- common/autotest_common.sh@941 -- # uname 00:10:06.134 07:37:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:06.134 07:37:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65181 00:10:06.134 killing process with pid 65181 00:10:06.134 07:37:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:06.134 07:37:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:06.134 07:37:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65181' 00:10:06.134 07:37:31 -- common/autotest_common.sh@955 -- # kill 65181 00:10:06.134 07:37:31 -- common/autotest_common.sh@960 -- # wait 65181 00:10:06.394 07:37:31 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:06.394 07:37:31 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:10:06.394 07:37:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:06.394 07:37:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.394 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:10:06.394 07:37:31 -- nvmf/common.sh@469 -- # nvmfpid=65244 00:10:06.394 07:37:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.394 07:37:31 -- nvmf/common.sh@470 -- # waitforlisten 65244 00:10:06.394 07:37:31 -- common/autotest_common.sh@829 -- # '[' -z 65244 ']' 00:10:06.394 07:37:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
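The nvmf_subsystem_add_host failure above is the intended negative case: with key_long.txt chmod'd to 0666, tcp_load_psk rejects the file ("Incorrect permissions for PSK file") and the RPC surfaces a generic -32603 Internal error; the test then restores 0600 (traced above) before bringing up the next target instance. A small guard of that shape, using the same path as this job, might look like:

KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
# SPDK rejects PSK files that group/others can access, so keep them at 0600.
mode=$(stat -c '%a' "$KEY")
if [ "$mode" != "600" ]; then
    echo "tightening PSK file mode: $mode -> 600" >&2
    chmod 0600 "$KEY"
fi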
00:10:06.394 07:37:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.394 07:37:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.394 07:37:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.394 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:10:06.394 [2024-12-02 07:37:31.958527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.394 [2024-12-02 07:37:31.959078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.654 [2024-12-02 07:37:32.082399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.654 [2024-12-02 07:37:32.131455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:06.654 [2024-12-02 07:37:32.131804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.654 [2024-12-02 07:37:32.131824] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.654 [2024-12-02 07:37:32.131833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.654 [2024-12-02 07:37:32.131863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.593 07:37:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.593 07:37:32 -- common/autotest_common.sh@862 -- # return 0 00:10:07.593 07:37:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:07.593 07:37:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.593 07:37:32 -- common/autotest_common.sh@10 -- # set +x 00:10:07.593 07:37:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.593 07:37:32 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:07.593 07:37:32 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:07.593 07:37:32 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:07.593 [2024-12-02 07:37:33.190984] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.593 07:37:33 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:07.852 07:37:33 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:08.111 [2024-12-02 07:37:33.591061] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:08.111 [2024-12-02 07:37:33.591387] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.111 07:37:33 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:08.371 malloc0 00:10:08.371 07:37:33 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:08.630 07:37:34 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:08.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.890 07:37:34 -- target/tls.sh@197 -- # bdevperf_pid=65293 00:10:08.890 07:37:34 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:08.890 07:37:34 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:08.890 07:37:34 -- target/tls.sh@200 -- # waitforlisten 65293 /var/tmp/bdevperf.sock 00:10:08.890 07:37:34 -- common/autotest_common.sh@829 -- # '[' -z 65293 ']' 00:10:08.890 07:37:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.890 07:37:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.890 07:37:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.890 07:37:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.890 07:37:34 -- common/autotest_common.sh@10 -- # set +x 00:10:08.890 [2024-12-02 07:37:34.349456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.890 [2024-12-02 07:37:34.349698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65293 ] 00:10:08.890 [2024-12-02 07:37:34.481414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.149 [2024-12-02 07:37:34.550260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.717 07:37:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.717 07:37:35 -- common/autotest_common.sh@862 -- # return 0 00:10:09.717 07:37:35 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:09.975 [2024-12-02 07:37:35.405269] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:09.975 TLSTESTn1 00:10:09.975 07:37:35 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:10.234 07:37:35 -- target/tls.sh@205 -- # tgtconf='{ 00:10:10.234 "subsystems": [ 00:10:10.234 { 00:10:10.234 "subsystem": "iobuf", 00:10:10.234 "config": [ 00:10:10.234 { 00:10:10.234 "method": "iobuf_set_options", 00:10:10.234 "params": { 00:10:10.234 "small_pool_count": 8192, 00:10:10.234 "large_pool_count": 1024, 00:10:10.234 "small_bufsize": 8192, 00:10:10.234 "large_bufsize": 135168 00:10:10.234 } 00:10:10.234 } 00:10:10.234 ] 00:10:10.234 }, 00:10:10.234 { 00:10:10.234 "subsystem": "sock", 00:10:10.234 "config": [ 00:10:10.234 { 00:10:10.234 "method": "sock_impl_set_options", 00:10:10.234 "params": { 00:10:10.234 "impl_name": "uring", 00:10:10.234 "recv_buf_size": 2097152, 00:10:10.234 "send_buf_size": 2097152, 00:10:10.234 "enable_recv_pipe": true, 00:10:10.234 "enable_quickack": false, 00:10:10.234 "enable_placement_id": 0, 00:10:10.234 "enable_zerocopy_send_server": false, 00:10:10.234 "enable_zerocopy_send_client": false, 00:10:10.234 "zerocopy_threshold": 0, 00:10:10.234 "tls_version": 0, 00:10:10.234 "enable_ktls": false 
00:10:10.234 } 00:10:10.234 }, 00:10:10.234 { 00:10:10.234 "method": "sock_impl_set_options", 00:10:10.234 "params": { 00:10:10.234 "impl_name": "posix", 00:10:10.234 "recv_buf_size": 2097152, 00:10:10.234 "send_buf_size": 2097152, 00:10:10.234 "enable_recv_pipe": true, 00:10:10.234 "enable_quickack": false, 00:10:10.234 "enable_placement_id": 0, 00:10:10.234 "enable_zerocopy_send_server": true, 00:10:10.234 "enable_zerocopy_send_client": false, 00:10:10.234 "zerocopy_threshold": 0, 00:10:10.234 "tls_version": 0, 00:10:10.234 "enable_ktls": false 00:10:10.234 } 00:10:10.234 }, 00:10:10.234 { 00:10:10.234 "method": "sock_impl_set_options", 00:10:10.234 "params": { 00:10:10.234 "impl_name": "ssl", 00:10:10.234 "recv_buf_size": 4096, 00:10:10.234 "send_buf_size": 4096, 00:10:10.234 "enable_recv_pipe": true, 00:10:10.234 "enable_quickack": false, 00:10:10.234 "enable_placement_id": 0, 00:10:10.234 "enable_zerocopy_send_server": true, 00:10:10.234 "enable_zerocopy_send_client": false, 00:10:10.234 "zerocopy_threshold": 0, 00:10:10.234 "tls_version": 0, 00:10:10.234 "enable_ktls": false 00:10:10.234 } 00:10:10.234 } 00:10:10.234 ] 00:10:10.234 }, 00:10:10.234 { 00:10:10.234 "subsystem": "vmd", 00:10:10.234 "config": [] 00:10:10.234 }, 00:10:10.234 { 00:10:10.234 "subsystem": "accel", 00:10:10.234 "config": [ 00:10:10.234 { 00:10:10.234 "method": "accel_set_options", 00:10:10.234 "params": { 00:10:10.235 "small_cache_size": 128, 00:10:10.235 "large_cache_size": 16, 00:10:10.235 "task_count": 2048, 00:10:10.235 "sequence_count": 2048, 00:10:10.235 "buf_count": 2048 00:10:10.235 } 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "subsystem": "bdev", 00:10:10.235 "config": [ 00:10:10.235 { 00:10:10.235 "method": "bdev_set_options", 00:10:10.235 "params": { 00:10:10.235 "bdev_io_pool_size": 65535, 00:10:10.235 "bdev_io_cache_size": 256, 00:10:10.235 "bdev_auto_examine": true, 00:10:10.235 "iobuf_small_cache_size": 128, 00:10:10.235 "iobuf_large_cache_size": 16 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "bdev_raid_set_options", 00:10:10.235 "params": { 00:10:10.235 "process_window_size_kb": 1024 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "bdev_iscsi_set_options", 00:10:10.235 "params": { 00:10:10.235 "timeout_sec": 30 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "bdev_nvme_set_options", 00:10:10.235 "params": { 00:10:10.235 "action_on_timeout": "none", 00:10:10.235 "timeout_us": 0, 00:10:10.235 "timeout_admin_us": 0, 00:10:10.235 "keep_alive_timeout_ms": 10000, 00:10:10.235 "transport_retry_count": 4, 00:10:10.235 "arbitration_burst": 0, 00:10:10.235 "low_priority_weight": 0, 00:10:10.235 "medium_priority_weight": 0, 00:10:10.235 "high_priority_weight": 0, 00:10:10.235 "nvme_adminq_poll_period_us": 10000, 00:10:10.235 "nvme_ioq_poll_period_us": 0, 00:10:10.235 "io_queue_requests": 0, 00:10:10.235 "delay_cmd_submit": true, 00:10:10.235 "bdev_retry_count": 3, 00:10:10.235 "transport_ack_timeout": 0, 00:10:10.235 "ctrlr_loss_timeout_sec": 0, 00:10:10.235 "reconnect_delay_sec": 0, 00:10:10.235 "fast_io_fail_timeout_sec": 0, 00:10:10.235 "generate_uuids": false, 00:10:10.235 "transport_tos": 0, 00:10:10.235 "io_path_stat": false, 00:10:10.235 "allow_accel_sequence": false 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "bdev_nvme_set_hotplug", 00:10:10.235 "params": { 00:10:10.235 "period_us": 100000, 00:10:10.235 "enable": false 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 
00:10:10.235 "method": "bdev_malloc_create", 00:10:10.235 "params": { 00:10:10.235 "name": "malloc0", 00:10:10.235 "num_blocks": 8192, 00:10:10.235 "block_size": 4096, 00:10:10.235 "physical_block_size": 4096, 00:10:10.235 "uuid": "2a95f200-3212-47e4-8d5b-fad8c818a84f", 00:10:10.235 "optimal_io_boundary": 0 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "bdev_wait_for_examine" 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "subsystem": "nbd", 00:10:10.235 "config": [] 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "subsystem": "scheduler", 00:10:10.235 "config": [ 00:10:10.235 { 00:10:10.235 "method": "framework_set_scheduler", 00:10:10.235 "params": { 00:10:10.235 "name": "static" 00:10:10.235 } 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "subsystem": "nvmf", 00:10:10.235 "config": [ 00:10:10.235 { 00:10:10.235 "method": "nvmf_set_config", 00:10:10.235 "params": { 00:10:10.235 "discovery_filter": "match_any", 00:10:10.235 "admin_cmd_passthru": { 00:10:10.235 "identify_ctrlr": false 00:10:10.235 } 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_set_max_subsystems", 00:10:10.235 "params": { 00:10:10.235 "max_subsystems": 1024 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_set_crdt", 00:10:10.235 "params": { 00:10:10.235 "crdt1": 0, 00:10:10.235 "crdt2": 0, 00:10:10.235 "crdt3": 0 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_create_transport", 00:10:10.235 "params": { 00:10:10.235 "trtype": "TCP", 00:10:10.235 "max_queue_depth": 128, 00:10:10.235 "max_io_qpairs_per_ctrlr": 127, 00:10:10.235 "in_capsule_data_size": 4096, 00:10:10.235 "max_io_size": 131072, 00:10:10.235 "io_unit_size": 131072, 00:10:10.235 "max_aq_depth": 128, 00:10:10.235 "num_shared_buffers": 511, 00:10:10.235 "buf_cache_size": 4294967295, 00:10:10.235 "dif_insert_or_strip": false, 00:10:10.235 "zcopy": false, 00:10:10.235 "c2h_success": false, 00:10:10.235 "sock_priority": 0, 00:10:10.235 "abort_timeout_sec": 1 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_create_subsystem", 00:10:10.235 "params": { 00:10:10.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.235 "allow_any_host": false, 00:10:10.235 "serial_number": "SPDK00000000000001", 00:10:10.235 "model_number": "SPDK bdev Controller", 00:10:10.235 "max_namespaces": 10, 00:10:10.235 "min_cntlid": 1, 00:10:10.235 "max_cntlid": 65519, 00:10:10.235 "ana_reporting": false 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_subsystem_add_host", 00:10:10.235 "params": { 00:10:10.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.235 "host": "nqn.2016-06.io.spdk:host1", 00:10:10.235 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_subsystem_add_ns", 00:10:10.235 "params": { 00:10:10.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.235 "namespace": { 00:10:10.235 "nsid": 1, 00:10:10.235 "bdev_name": "malloc0", 00:10:10.235 "nguid": "2A95F200321247E48D5BFAD8C818A84F", 00:10:10.235 "uuid": "2a95f200-3212-47e4-8d5b-fad8c818a84f" 00:10:10.235 } 00:10:10.235 } 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "method": "nvmf_subsystem_add_listener", 00:10:10.235 "params": { 00:10:10.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.235 "listen_address": { 00:10:10.235 "trtype": "TCP", 00:10:10.235 "adrfam": "IPv4", 00:10:10.235 "traddr": "10.0.0.2", 00:10:10.235 "trsvcid": "4420" 
00:10:10.235 }, 00:10:10.235 "secure_channel": true 00:10:10.235 } 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 }' 00:10:10.235 07:37:35 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:10:10.495 07:37:36 -- target/tls.sh@206 -- # bdevperfconf='{ 00:10:10.495 "subsystems": [ 00:10:10.495 { 00:10:10.495 "subsystem": "iobuf", 00:10:10.495 "config": [ 00:10:10.495 { 00:10:10.495 "method": "iobuf_set_options", 00:10:10.495 "params": { 00:10:10.495 "small_pool_count": 8192, 00:10:10.495 "large_pool_count": 1024, 00:10:10.495 "small_bufsize": 8192, 00:10:10.495 "large_bufsize": 135168 00:10:10.495 } 00:10:10.495 } 00:10:10.495 ] 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "subsystem": "sock", 00:10:10.495 "config": [ 00:10:10.495 { 00:10:10.495 "method": "sock_impl_set_options", 00:10:10.495 "params": { 00:10:10.495 "impl_name": "uring", 00:10:10.495 "recv_buf_size": 2097152, 00:10:10.495 "send_buf_size": 2097152, 00:10:10.495 "enable_recv_pipe": true, 00:10:10.495 "enable_quickack": false, 00:10:10.495 "enable_placement_id": 0, 00:10:10.495 "enable_zerocopy_send_server": false, 00:10:10.495 "enable_zerocopy_send_client": false, 00:10:10.495 "zerocopy_threshold": 0, 00:10:10.495 "tls_version": 0, 00:10:10.495 "enable_ktls": false 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "sock_impl_set_options", 00:10:10.495 "params": { 00:10:10.495 "impl_name": "posix", 00:10:10.495 "recv_buf_size": 2097152, 00:10:10.495 "send_buf_size": 2097152, 00:10:10.495 "enable_recv_pipe": true, 00:10:10.495 "enable_quickack": false, 00:10:10.495 "enable_placement_id": 0, 00:10:10.495 "enable_zerocopy_send_server": true, 00:10:10.495 "enable_zerocopy_send_client": false, 00:10:10.495 "zerocopy_threshold": 0, 00:10:10.495 "tls_version": 0, 00:10:10.495 "enable_ktls": false 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "sock_impl_set_options", 00:10:10.495 "params": { 00:10:10.495 "impl_name": "ssl", 00:10:10.495 "recv_buf_size": 4096, 00:10:10.495 "send_buf_size": 4096, 00:10:10.495 "enable_recv_pipe": true, 00:10:10.495 "enable_quickack": false, 00:10:10.495 "enable_placement_id": 0, 00:10:10.495 "enable_zerocopy_send_server": true, 00:10:10.495 "enable_zerocopy_send_client": false, 00:10:10.495 "zerocopy_threshold": 0, 00:10:10.495 "tls_version": 0, 00:10:10.495 "enable_ktls": false 00:10:10.495 } 00:10:10.495 } 00:10:10.495 ] 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "subsystem": "vmd", 00:10:10.495 "config": [] 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "subsystem": "accel", 00:10:10.495 "config": [ 00:10:10.495 { 00:10:10.495 "method": "accel_set_options", 00:10:10.495 "params": { 00:10:10.495 "small_cache_size": 128, 00:10:10.495 "large_cache_size": 16, 00:10:10.495 "task_count": 2048, 00:10:10.495 "sequence_count": 2048, 00:10:10.495 "buf_count": 2048 00:10:10.495 } 00:10:10.495 } 00:10:10.495 ] 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "subsystem": "bdev", 00:10:10.495 "config": [ 00:10:10.495 { 00:10:10.495 "method": "bdev_set_options", 00:10:10.495 "params": { 00:10:10.495 "bdev_io_pool_size": 65535, 00:10:10.495 "bdev_io_cache_size": 256, 00:10:10.495 "bdev_auto_examine": true, 00:10:10.495 "iobuf_small_cache_size": 128, 00:10:10.495 "iobuf_large_cache_size": 16 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "bdev_raid_set_options", 00:10:10.495 "params": { 00:10:10.495 "process_window_size_kb": 1024 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 
00:10:10.495 "method": "bdev_iscsi_set_options", 00:10:10.495 "params": { 00:10:10.495 "timeout_sec": 30 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "bdev_nvme_set_options", 00:10:10.495 "params": { 00:10:10.495 "action_on_timeout": "none", 00:10:10.495 "timeout_us": 0, 00:10:10.495 "timeout_admin_us": 0, 00:10:10.495 "keep_alive_timeout_ms": 10000, 00:10:10.495 "transport_retry_count": 4, 00:10:10.495 "arbitration_burst": 0, 00:10:10.495 "low_priority_weight": 0, 00:10:10.495 "medium_priority_weight": 0, 00:10:10.495 "high_priority_weight": 0, 00:10:10.495 "nvme_adminq_poll_period_us": 10000, 00:10:10.495 "nvme_ioq_poll_period_us": 0, 00:10:10.495 "io_queue_requests": 512, 00:10:10.495 "delay_cmd_submit": true, 00:10:10.495 "bdev_retry_count": 3, 00:10:10.495 "transport_ack_timeout": 0, 00:10:10.495 "ctrlr_loss_timeout_sec": 0, 00:10:10.495 "reconnect_delay_sec": 0, 00:10:10.495 "fast_io_fail_timeout_sec": 0, 00:10:10.495 "generate_uuids": false, 00:10:10.495 "transport_tos": 0, 00:10:10.495 "io_path_stat": false, 00:10:10.495 "allow_accel_sequence": false 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "bdev_nvme_attach_controller", 00:10:10.495 "params": { 00:10:10.495 "name": "TLSTEST", 00:10:10.495 "trtype": "TCP", 00:10:10.495 "adrfam": "IPv4", 00:10:10.495 "traddr": "10.0.0.2", 00:10:10.495 "trsvcid": "4420", 00:10:10.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.495 "prchk_reftag": false, 00:10:10.495 "prchk_guard": false, 00:10:10.495 "ctrlr_loss_timeout_sec": 0, 00:10:10.495 "reconnect_delay_sec": 0, 00:10:10.495 "fast_io_fail_timeout_sec": 0, 00:10:10.495 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:10.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.495 "hdgst": false, 00:10:10.495 "ddgst": false 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "bdev_nvme_set_hotplug", 00:10:10.495 "params": { 00:10:10.495 "period_us": 100000, 00:10:10.495 "enable": false 00:10:10.495 } 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "method": "bdev_wait_for_examine" 00:10:10.495 } 00:10:10.495 ] 00:10:10.495 }, 00:10:10.495 { 00:10:10.495 "subsystem": "nbd", 00:10:10.495 "config": [] 00:10:10.495 } 00:10:10.495 ] 00:10:10.495 }' 00:10:10.495 07:37:36 -- target/tls.sh@208 -- # killprocess 65293 00:10:10.495 07:37:36 -- common/autotest_common.sh@936 -- # '[' -z 65293 ']' 00:10:10.495 07:37:36 -- common/autotest_common.sh@940 -- # kill -0 65293 00:10:10.495 07:37:36 -- common/autotest_common.sh@941 -- # uname 00:10:10.495 07:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:10.495 07:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65293 00:10:10.495 killing process with pid 65293 00:10:10.495 Received shutdown signal, test time was about 10.000000 seconds 00:10:10.495 00:10:10.495 Latency(us) 00:10:10.495 [2024-12-02T07:37:36.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.495 [2024-12-02T07:37:36.119Z] =================================================================================================================== 00:10:10.495 [2024-12-02T07:37:36.119Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:10.495 07:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:10.495 07:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:10.495 07:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65293' 00:10:10.495 07:37:36 -- 
common/autotest_common.sh@955 -- # kill 65293 00:10:10.495 07:37:36 -- common/autotest_common.sh@960 -- # wait 65293 00:10:10.754 07:37:36 -- target/tls.sh@209 -- # killprocess 65244 00:10:10.754 07:37:36 -- common/autotest_common.sh@936 -- # '[' -z 65244 ']' 00:10:10.754 07:37:36 -- common/autotest_common.sh@940 -- # kill -0 65244 00:10:10.754 07:37:36 -- common/autotest_common.sh@941 -- # uname 00:10:10.754 07:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:10.754 07:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65244 00:10:10.754 killing process with pid 65244 00:10:10.754 07:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:10.754 07:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:10.754 07:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65244' 00:10:10.754 07:37:36 -- common/autotest_common.sh@955 -- # kill 65244 00:10:10.755 07:37:36 -- common/autotest_common.sh@960 -- # wait 65244 00:10:11.013 07:37:36 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:10:11.013 07:37:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:11.013 07:37:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.013 07:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:11.013 07:37:36 -- target/tls.sh@212 -- # echo '{ 00:10:11.013 "subsystems": [ 00:10:11.013 { 00:10:11.013 "subsystem": "iobuf", 00:10:11.013 "config": [ 00:10:11.013 { 00:10:11.013 "method": "iobuf_set_options", 00:10:11.013 "params": { 00:10:11.013 "small_pool_count": 8192, 00:10:11.013 "large_pool_count": 1024, 00:10:11.013 "small_bufsize": 8192, 00:10:11.013 "large_bufsize": 135168 00:10:11.013 } 00:10:11.013 } 00:10:11.013 ] 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "subsystem": "sock", 00:10:11.013 "config": [ 00:10:11.013 { 00:10:11.013 "method": "sock_impl_set_options", 00:10:11.013 "params": { 00:10:11.013 "impl_name": "uring", 00:10:11.013 "recv_buf_size": 2097152, 00:10:11.013 "send_buf_size": 2097152, 00:10:11.013 "enable_recv_pipe": true, 00:10:11.013 "enable_quickack": false, 00:10:11.013 "enable_placement_id": 0, 00:10:11.013 "enable_zerocopy_send_server": false, 00:10:11.013 "enable_zerocopy_send_client": false, 00:10:11.013 "zerocopy_threshold": 0, 00:10:11.013 "tls_version": 0, 00:10:11.013 "enable_ktls": false 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "sock_impl_set_options", 00:10:11.013 "params": { 00:10:11.013 "impl_name": "posix", 00:10:11.013 "recv_buf_size": 2097152, 00:10:11.013 "send_buf_size": 2097152, 00:10:11.013 "enable_recv_pipe": true, 00:10:11.013 "enable_quickack": false, 00:10:11.013 "enable_placement_id": 0, 00:10:11.013 "enable_zerocopy_send_server": true, 00:10:11.013 "enable_zerocopy_send_client": false, 00:10:11.013 "zerocopy_threshold": 0, 00:10:11.013 "tls_version": 0, 00:10:11.013 "enable_ktls": false 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "sock_impl_set_options", 00:10:11.013 "params": { 00:10:11.013 "impl_name": "ssl", 00:10:11.013 "recv_buf_size": 4096, 00:10:11.013 "send_buf_size": 4096, 00:10:11.013 "enable_recv_pipe": true, 00:10:11.013 "enable_quickack": false, 00:10:11.013 "enable_placement_id": 0, 00:10:11.013 "enable_zerocopy_send_server": true, 00:10:11.013 "enable_zerocopy_send_client": false, 00:10:11.013 "zerocopy_threshold": 0, 00:10:11.013 "tls_version": 0, 00:10:11.013 "enable_ktls": false 00:10:11.013 } 00:10:11.013 } 00:10:11.013 ] 00:10:11.013 }, 00:10:11.013 { 
00:10:11.013 "subsystem": "vmd", 00:10:11.013 "config": [] 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "subsystem": "accel", 00:10:11.013 "config": [ 00:10:11.013 { 00:10:11.013 "method": "accel_set_options", 00:10:11.013 "params": { 00:10:11.013 "small_cache_size": 128, 00:10:11.013 "large_cache_size": 16, 00:10:11.013 "task_count": 2048, 00:10:11.013 "sequence_count": 2048, 00:10:11.013 "buf_count": 2048 00:10:11.013 } 00:10:11.013 } 00:10:11.013 ] 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "subsystem": "bdev", 00:10:11.013 "config": [ 00:10:11.013 { 00:10:11.013 "method": "bdev_set_options", 00:10:11.013 "params": { 00:10:11.013 "bdev_io_pool_size": 65535, 00:10:11.013 "bdev_io_cache_size": 256, 00:10:11.013 "bdev_auto_examine": true, 00:10:11.013 "iobuf_small_cache_size": 128, 00:10:11.013 "iobuf_large_cache_size": 16 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "bdev_raid_set_options", 00:10:11.013 "params": { 00:10:11.013 "process_window_size_kb": 1024 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "bdev_iscsi_set_options", 00:10:11.013 "params": { 00:10:11.013 "timeout_sec": 30 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "bdev_nvme_set_options", 00:10:11.013 "params": { 00:10:11.013 "action_on_timeout": "none", 00:10:11.013 "timeout_us": 0, 00:10:11.013 "timeout_admin_us": 0, 00:10:11.013 "keep_alive_timeout_ms": 10000, 00:10:11.013 "transport_retry_count": 4, 00:10:11.013 "arbitration_burst": 0, 00:10:11.013 "low_priority_weight": 0, 00:10:11.013 "medium_priority_weight": 0, 00:10:11.013 "high_priority_weight": 0, 00:10:11.013 "nvme_adminq_poll_period_us": 10000, 00:10:11.013 "nvme_ioq_poll_period_us": 0, 00:10:11.013 "io_queue_requests": 0, 00:10:11.013 "delay_cmd_submit": true, 00:10:11.013 "bdev_retry_count": 3, 00:10:11.013 "transport_ack_timeout": 0, 00:10:11.013 "ctrlr_loss_timeout_sec": 0, 00:10:11.013 "reconnect_delay_sec": 0, 00:10:11.013 "fast_io_fail_timeout_sec": 0, 00:10:11.013 "generate_uuids": false, 00:10:11.013 "transport_tos": 0, 00:10:11.013 "io_path_stat": false, 00:10:11.013 "allow_accel_sequence": false 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "bdev_nvme_set_hotplug", 00:10:11.013 "params": { 00:10:11.013 "period_us": 100000, 00:10:11.013 "enable": false 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "bdev_malloc_create", 00:10:11.013 "params": { 00:10:11.013 "name": "malloc0", 00:10:11.013 "num_blocks": 8192, 00:10:11.013 "block_size": 4096, 00:10:11.013 "physical_block_size": 4096, 00:10:11.013 "uuid": "2a95f200-3212-47e4-8d5b-fad8c818a84f", 00:10:11.013 "optimal_io_boundary": 0 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "bdev_wait_for_examine" 00:10:11.013 } 00:10:11.013 ] 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "subsystem": "nbd", 00:10:11.013 "config": [] 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "subsystem": "scheduler", 00:10:11.013 "config": [ 00:10:11.013 { 00:10:11.013 "method": "framework_set_scheduler", 00:10:11.013 "params": { 00:10:11.013 "name": "static" 00:10:11.013 } 00:10:11.013 } 00:10:11.013 ] 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "subsystem": "nvmf", 00:10:11.013 "config": [ 00:10:11.013 { 00:10:11.013 "method": "nvmf_set_config", 00:10:11.013 "params": { 00:10:11.013 "discovery_filter": "match_any", 00:10:11.013 "admin_cmd_passthru": { 00:10:11.013 "identify_ctrlr": false 00:10:11.013 } 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "nvmf_set_max_subsystems", 
00:10:11.013 "params": { 00:10:11.013 "max_subsystems": 1024 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "nvmf_set_crdt", 00:10:11.013 "params": { 00:10:11.013 "crdt1": 0, 00:10:11.013 "crdt2": 0, 00:10:11.013 "crdt3": 0 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "nvmf_create_transport", 00:10:11.013 "params": { 00:10:11.013 "trtype": "TCP", 00:10:11.013 "max_queue_depth": 128, 00:10:11.013 "max_io_qpairs_per_ctrlr": 127, 00:10:11.013 "in_capsule_data_size": 4096, 00:10:11.013 "max_io_size": 131072, 00:10:11.013 "io_unit_size": 131072, 00:10:11.013 "max_aq_depth": 128, 00:10:11.013 "num_shared_buffers": 511, 00:10:11.013 "buf_cache_size": 4294967295, 00:10:11.013 "dif_insert_or_strip": false, 00:10:11.013 "zcopy": false, 00:10:11.013 "c2h_success": false, 00:10:11.013 "sock_priority": 0, 00:10:11.013 "abort_timeout_sec": 1 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "nvmf_create_subsystem", 00:10:11.013 "params": { 00:10:11.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.013 "allow_any_host": false, 00:10:11.013 "serial_number": "SPDK00000000000001", 00:10:11.013 "model_number": "SPDK bdev Controller", 00:10:11.013 "max_namespaces": 10, 00:10:11.013 "min_cntlid": 1, 00:10:11.013 "max_cntlid": 65519, 00:10:11.013 "ana_reporting": false 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "nvmf_subsystem_add_host", 00:10:11.013 "params": { 00:10:11.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.013 "host": "nqn.2016-06.io.spdk:host1", 00:10:11.013 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:11.013 } 00:10:11.013 }, 00:10:11.013 { 00:10:11.013 "method": "nvmf_subsystem_add_ns", 00:10:11.013 "params": { 00:10:11.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.014 "namespace": { 00:10:11.014 "nsid": 1, 00:10:11.014 "bdev_name": "malloc0", 00:10:11.014 "nguid": "2A95F200321247E48D5BFAD8C818A84F", 00:10:11.014 "uuid": "2a95f200-3212-47e4-8d5b-fad8c818a84f" 00:10:11.014 } 00:10:11.014 } 00:10:11.014 }, 00:10:11.014 { 00:10:11.014 "method": "nvmf_subsystem_add_listener", 00:10:11.014 "params": { 00:10:11.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.014 "listen_address": { 00:10:11.014 "trtype": "TCP", 00:10:11.014 "adrfam": "IPv4", 00:10:11.014 "traddr": "10.0.0.2", 00:10:11.014 "trsvcid": "4420" 00:10:11.014 }, 00:10:11.014 "secure_channel": true 00:10:11.014 } 00:10:11.014 } 00:10:11.014 ] 00:10:11.014 } 00:10:11.014 ] 00:10:11.014 }' 00:10:11.014 07:37:36 -- nvmf/common.sh@469 -- # nvmfpid=65336 00:10:11.014 07:37:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:10:11.014 07:37:36 -- nvmf/common.sh@470 -- # waitforlisten 65336 00:10:11.014 07:37:36 -- common/autotest_common.sh@829 -- # '[' -z 65336 ']' 00:10:11.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.014 07:37:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.014 07:37:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.014 07:37:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:11.014 07:37:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.014 07:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:11.014 [2024-12-02 07:37:36.531911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:11.014 [2024-12-02 07:37:36.532194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.272 [2024-12-02 07:37:36.661470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.273 [2024-12-02 07:37:36.715639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:11.273 [2024-12-02 07:37:36.715814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.273 [2024-12-02 07:37:36.715826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.273 [2024-12-02 07:37:36.715833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.273 [2024-12-02 07:37:36.715870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.532 [2024-12-02 07:37:36.894616] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.532 [2024-12-02 07:37:36.926536] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:11.532 [2024-12-02 07:37:36.926857] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.791 07:37:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.791 07:37:37 -- common/autotest_common.sh@862 -- # return 0 00:10:11.791 07:37:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:11.791 07:37:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.791 07:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:12.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:12.050 07:37:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.050 07:37:37 -- target/tls.sh@216 -- # bdevperf_pid=65368 00:10:12.050 07:37:37 -- target/tls.sh@217 -- # waitforlisten 65368 /var/tmp/bdevperf.sock 00:10:12.050 07:37:37 -- common/autotest_common.sh@829 -- # '[' -z 65368 ']' 00:10:12.050 07:37:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:12.050 07:37:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.050 07:37:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:10:12.050 07:37:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.050 07:37:37 -- common/autotest_common.sh@10 -- # set +x 00:10:12.050 07:37:37 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:10:12.050 07:37:37 -- target/tls.sh@213 -- # echo '{ 00:10:12.050 "subsystems": [ 00:10:12.050 { 00:10:12.050 "subsystem": "iobuf", 00:10:12.050 "config": [ 00:10:12.050 { 00:10:12.050 "method": "iobuf_set_options", 00:10:12.050 "params": { 00:10:12.050 "small_pool_count": 8192, 00:10:12.050 "large_pool_count": 1024, 00:10:12.050 "small_bufsize": 8192, 00:10:12.050 "large_bufsize": 135168 00:10:12.050 } 00:10:12.050 } 00:10:12.050 ] 00:10:12.050 }, 00:10:12.050 { 00:10:12.050 "subsystem": "sock", 00:10:12.050 "config": [ 00:10:12.050 { 00:10:12.050 "method": "sock_impl_set_options", 00:10:12.050 "params": { 00:10:12.050 "impl_name": "uring", 00:10:12.050 "recv_buf_size": 2097152, 00:10:12.050 "send_buf_size": 2097152, 00:10:12.050 "enable_recv_pipe": true, 00:10:12.050 "enable_quickack": false, 00:10:12.050 "enable_placement_id": 0, 00:10:12.050 "enable_zerocopy_send_server": false, 00:10:12.050 "enable_zerocopy_send_client": false, 00:10:12.050 "zerocopy_threshold": 0, 00:10:12.050 "tls_version": 0, 00:10:12.050 "enable_ktls": false 00:10:12.050 } 00:10:12.050 }, 00:10:12.050 { 00:10:12.050 "method": "sock_impl_set_options", 00:10:12.050 "params": { 00:10:12.050 "impl_name": "posix", 00:10:12.050 "recv_buf_size": 2097152, 00:10:12.050 "send_buf_size": 2097152, 00:10:12.050 "enable_recv_pipe": true, 00:10:12.051 "enable_quickack": false, 00:10:12.051 "enable_placement_id": 0, 00:10:12.051 "enable_zerocopy_send_server": true, 00:10:12.051 "enable_zerocopy_send_client": false, 00:10:12.051 "zerocopy_threshold": 0, 00:10:12.051 "tls_version": 0, 00:10:12.051 "enable_ktls": false 00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "sock_impl_set_options", 00:10:12.051 "params": { 00:10:12.051 "impl_name": "ssl", 00:10:12.051 "recv_buf_size": 4096, 00:10:12.051 "send_buf_size": 4096, 00:10:12.051 "enable_recv_pipe": true, 00:10:12.051 "enable_quickack": false, 00:10:12.051 "enable_placement_id": 0, 00:10:12.051 "enable_zerocopy_send_server": true, 00:10:12.051 "enable_zerocopy_send_client": false, 00:10:12.051 "zerocopy_threshold": 0, 00:10:12.051 "tls_version": 0, 00:10:12.051 "enable_ktls": false 00:10:12.051 } 00:10:12.051 } 00:10:12.051 ] 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "subsystem": "vmd", 00:10:12.051 "config": [] 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "subsystem": "accel", 00:10:12.051 "config": [ 00:10:12.051 { 00:10:12.051 "method": "accel_set_options", 00:10:12.051 "params": { 00:10:12.051 "small_cache_size": 128, 00:10:12.051 "large_cache_size": 16, 00:10:12.051 "task_count": 2048, 00:10:12.051 "sequence_count": 2048, 00:10:12.051 "buf_count": 2048 00:10:12.051 } 00:10:12.051 } 00:10:12.051 ] 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "subsystem": "bdev", 00:10:12.051 "config": [ 00:10:12.051 { 00:10:12.051 "method": "bdev_set_options", 00:10:12.051 "params": { 00:10:12.051 "bdev_io_pool_size": 65535, 00:10:12.051 "bdev_io_cache_size": 256, 00:10:12.051 "bdev_auto_examine": true, 00:10:12.051 "iobuf_small_cache_size": 128, 00:10:12.051 "iobuf_large_cache_size": 16 00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "bdev_raid_set_options", 00:10:12.051 "params": { 00:10:12.051 "process_window_size_kb": 1024 
00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "bdev_iscsi_set_options", 00:10:12.051 "params": { 00:10:12.051 "timeout_sec": 30 00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "bdev_nvme_set_options", 00:10:12.051 "params": { 00:10:12.051 "action_on_timeout": "none", 00:10:12.051 "timeout_us": 0, 00:10:12.051 "timeout_admin_us": 0, 00:10:12.051 "keep_alive_timeout_ms": 10000, 00:10:12.051 "transport_retry_count": 4, 00:10:12.051 "arbitration_burst": 0, 00:10:12.051 "low_priority_weight": 0, 00:10:12.051 "medium_priority_weight": 0, 00:10:12.051 "high_priority_weight": 0, 00:10:12.051 "nvme_adminq_poll_period_us": 10000, 00:10:12.051 "nvme_ioq_poll_period_us": 0, 00:10:12.051 "io_queue_requests": 512, 00:10:12.051 "delay_cmd_submit": true, 00:10:12.051 "bdev_retry_count": 3, 00:10:12.051 "transport_ack_timeout": 0, 00:10:12.051 "ctrlr_loss_timeout_sec": 0, 00:10:12.051 "reconnect_delay_sec": 0, 00:10:12.051 "fast_io_fail_timeout_sec": 0, 00:10:12.051 "generate_uuids": false, 00:10:12.051 "transport_tos": 0, 00:10:12.051 "io_path_stat": false, 00:10:12.051 "allow_accel_sequence": false 00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "bdev_nvme_attach_controller", 00:10:12.051 "params": { 00:10:12.051 "name": "TLSTEST", 00:10:12.051 "trtype": "TCP", 00:10:12.051 "adrfam": "IPv4", 00:10:12.051 "traddr": "10.0.0.2", 00:10:12.051 "trsvcid": "4420", 00:10:12.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.051 "prchk_reftag": false, 00:10:12.051 "prchk_guard": false, 00:10:12.051 "ctrlr_loss_timeout_sec": 0, 00:10:12.051 "reconnect_delay_sec": 0, 00:10:12.051 "fast_io_fail_timeout_sec": 0, 00:10:12.051 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:12.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.051 "hdgst": false, 00:10:12.051 "ddgst": false 00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "bdev_nvme_set_hotplug", 00:10:12.051 "params": { 00:10:12.051 "period_us": 100000, 00:10:12.051 "enable": false 00:10:12.051 } 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "method": "bdev_wait_for_examine" 00:10:12.051 } 00:10:12.051 ] 00:10:12.051 }, 00:10:12.051 { 00:10:12.051 "subsystem": "nbd", 00:10:12.051 "config": [] 00:10:12.051 } 00:10:12.051 ] 00:10:12.051 }' 00:10:12.051 [2024-12-02 07:37:37.457525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.051 [2024-12-02 07:37:37.457605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65368 ] 00:10:12.051 [2024-12-02 07:37:37.589027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.051 [2024-12-02 07:37:37.654918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.311 [2024-12-02 07:37:37.773609] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:12.879 07:37:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.879 07:37:38 -- common/autotest_common.sh@862 -- # return 0 00:10:12.879 07:37:38 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:12.879 Running I/O for 10 seconds... 
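The verify phase is driven entirely over bdevperf's own RPC socket: the application is started idle with -z, the TLS-protected controller is attached (via bdev_nvme_attach_controller as in the first instance above, or via the JSON config fed through -c /dev/fd/63 as here), and bdevperf.py perform_tests starts the timed run. A minimal sketch of that sequence, assuming the socket path, target address, and PSK file from this run (any other environment would substitute its own):

  # Start bdevperf idle (-z) on a private RPC socket; -w verify -t 10 requests a 10 s verify workload.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach the NVMe-oF/TCP controller with a pre-shared key so the queue pair is set up over TLS
  # (marked experimental in this SPDK revision).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

  # Kick off the run; the IOPS/latency table that follows in the log is its output.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests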
00:10:25.090 00:10:25.090 Latency(us) 00:10:25.090 [2024-12-02T07:37:50.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.090 [2024-12-02T07:37:50.714Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:25.090 Verification LBA range: start 0x0 length 0x2000 00:10:25.090 TLSTESTn1 : 10.01 7399.90 28.91 0.00 0.00 17273.58 3142.75 18826.71 00:10:25.090 [2024-12-02T07:37:50.714Z] =================================================================================================================== 00:10:25.090 [2024-12-02T07:37:50.714Z] Total : 7399.90 28.91 0.00 0.00 17273.58 3142.75 18826.71 00:10:25.090 0 00:10:25.090 07:37:48 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.090 07:37:48 -- target/tls.sh@223 -- # killprocess 65368 00:10:25.090 07:37:48 -- common/autotest_common.sh@936 -- # '[' -z 65368 ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@940 -- # kill -0 65368 00:10:25.090 07:37:48 -- common/autotest_common.sh@941 -- # uname 00:10:25.090 07:37:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65368 00:10:25.090 killing process with pid 65368 00:10:25.090 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.090 00:10:25.090 Latency(us) 00:10:25.090 [2024-12-02T07:37:50.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.090 [2024-12-02T07:37:50.714Z] =================================================================================================================== 00:10:25.090 [2024-12-02T07:37:50.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.090 07:37:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:25.090 07:37:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65368' 00:10:25.090 07:37:48 -- common/autotest_common.sh@955 -- # kill 65368 00:10:25.090 07:37:48 -- common/autotest_common.sh@960 -- # wait 65368 00:10:25.090 07:37:48 -- target/tls.sh@224 -- # killprocess 65336 00:10:25.090 07:37:48 -- common/autotest_common.sh@936 -- # '[' -z 65336 ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@940 -- # kill -0 65336 00:10:25.090 07:37:48 -- common/autotest_common.sh@941 -- # uname 00:10:25.090 07:37:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65336 00:10:25.090 killing process with pid 65336 00:10:25.090 07:37:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:25.090 07:37:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65336' 00:10:25.090 07:37:48 -- common/autotest_common.sh@955 -- # kill 65336 00:10:25.090 07:37:48 -- common/autotest_common.sh@960 -- # wait 65336 00:10:25.090 07:37:48 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:10:25.090 07:37:48 -- target/tls.sh@227 -- # cleanup 00:10:25.090 07:37:48 -- target/tls.sh@15 -- # process_shm --id 0 00:10:25.090 07:37:48 -- common/autotest_common.sh@806 -- # type=--id 00:10:25.090 07:37:48 -- common/autotest_common.sh@807 -- # id=0 00:10:25.090 07:37:48 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:10:25.090 07:37:48 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:25.090 07:37:48 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:25.090 07:37:48 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:25.090 07:37:48 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:25.090 nvmf_trace.0 00:10:25.090 07:37:48 -- common/autotest_common.sh@821 -- # return 0 00:10:25.090 Process with pid 65368 is not found 00:10:25.090 07:37:48 -- target/tls.sh@16 -- # killprocess 65368 00:10:25.090 07:37:48 -- common/autotest_common.sh@936 -- # '[' -z 65368 ']' 00:10:25.090 07:37:48 -- common/autotest_common.sh@940 -- # kill -0 65368 00:10:25.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65368) - No such process 00:10:25.090 07:37:48 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65368 is not found' 00:10:25.090 07:37:48 -- target/tls.sh@17 -- # nvmftestfini 00:10:25.090 07:37:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:25.090 07:37:48 -- nvmf/common.sh@116 -- # sync 00:10:25.090 07:37:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:25.090 07:37:49 -- nvmf/common.sh@119 -- # set +e 00:10:25.090 07:37:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:25.090 07:37:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:25.090 rmmod nvme_tcp 00:10:25.090 rmmod nvme_fabrics 00:10:25.090 rmmod nvme_keyring 00:10:25.090 07:37:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:25.090 07:37:49 -- nvmf/common.sh@123 -- # set -e 00:10:25.090 07:37:49 -- nvmf/common.sh@124 -- # return 0 00:10:25.090 07:37:49 -- nvmf/common.sh@477 -- # '[' -n 65336 ']' 00:10:25.090 07:37:49 -- nvmf/common.sh@478 -- # killprocess 65336 00:10:25.091 07:37:49 -- common/autotest_common.sh@936 -- # '[' -z 65336 ']' 00:10:25.091 Process with pid 65336 is not found 00:10:25.091 07:37:49 -- common/autotest_common.sh@940 -- # kill -0 65336 00:10:25.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65336) - No such process 00:10:25.091 07:37:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65336 is not found' 00:10:25.091 07:37:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:25.091 07:37:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:25.091 07:37:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:25.091 07:37:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.091 07:37:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:25.091 07:37:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.091 07:37:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.091 07:37:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.091 07:37:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:25.091 07:37:49 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:25.091 00:10:25.091 real 1m8.331s 00:10:25.091 user 1m45.691s 00:10:25.091 sys 0m22.796s 00:10:25.091 07:37:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:25.091 07:37:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.091 ************************************ 00:10:25.091 END TEST nvmf_tls 00:10:25.091 
************************************ 00:10:25.091 07:37:49 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:10:25.091 07:37:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:25.091 07:37:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.091 07:37:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.091 ************************************ 00:10:25.091 START TEST nvmf_fips 00:10:25.091 ************************************ 00:10:25.091 07:37:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:10:25.091 * Looking for test storage... 00:10:25.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:10:25.091 07:37:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:25.091 07:37:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:25.091 07:37:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:25.091 07:37:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:25.091 07:37:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:25.091 07:37:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:25.091 07:37:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:25.091 07:37:49 -- scripts/common.sh@335 -- # IFS=.-: 00:10:25.091 07:37:49 -- scripts/common.sh@335 -- # read -ra ver1 00:10:25.091 07:37:49 -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.091 07:37:49 -- scripts/common.sh@336 -- # read -ra ver2 00:10:25.091 07:37:49 -- scripts/common.sh@337 -- # local 'op=<' 00:10:25.091 07:37:49 -- scripts/common.sh@339 -- # ver1_l=2 00:10:25.091 07:37:49 -- scripts/common.sh@340 -- # ver2_l=1 00:10:25.091 07:37:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:25.091 07:37:49 -- scripts/common.sh@343 -- # case "$op" in 00:10:25.091 07:37:49 -- scripts/common.sh@344 -- # : 1 00:10:25.091 07:37:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:25.091 07:37:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.091 07:37:49 -- scripts/common.sh@364 -- # decimal 1 00:10:25.091 07:37:49 -- scripts/common.sh@352 -- # local d=1 00:10:25.091 07:37:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.091 07:37:49 -- scripts/common.sh@354 -- # echo 1 00:10:25.091 07:37:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:25.091 07:37:49 -- scripts/common.sh@365 -- # decimal 2 00:10:25.091 07:37:49 -- scripts/common.sh@352 -- # local d=2 00:10:25.091 07:37:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.091 07:37:49 -- scripts/common.sh@354 -- # echo 2 00:10:25.091 07:37:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:25.091 07:37:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:25.091 07:37:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:25.091 07:37:49 -- scripts/common.sh@367 -- # return 0 00:10:25.091 07:37:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.091 07:37:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.091 --rc genhtml_branch_coverage=1 00:10:25.091 --rc genhtml_function_coverage=1 00:10:25.091 --rc genhtml_legend=1 00:10:25.091 --rc geninfo_all_blocks=1 00:10:25.091 --rc geninfo_unexecuted_blocks=1 00:10:25.091 00:10:25.091 ' 00:10:25.091 07:37:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.091 --rc genhtml_branch_coverage=1 00:10:25.091 --rc genhtml_function_coverage=1 00:10:25.091 --rc genhtml_legend=1 00:10:25.091 --rc geninfo_all_blocks=1 00:10:25.091 --rc geninfo_unexecuted_blocks=1 00:10:25.091 00:10:25.091 ' 00:10:25.091 07:37:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.091 --rc genhtml_branch_coverage=1 00:10:25.091 --rc genhtml_function_coverage=1 00:10:25.091 --rc genhtml_legend=1 00:10:25.091 --rc geninfo_all_blocks=1 00:10:25.091 --rc geninfo_unexecuted_blocks=1 00:10:25.091 00:10:25.091 ' 00:10:25.091 07:37:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.091 --rc genhtml_branch_coverage=1 00:10:25.091 --rc genhtml_function_coverage=1 00:10:25.091 --rc genhtml_legend=1 00:10:25.091 --rc geninfo_all_blocks=1 00:10:25.091 --rc geninfo_unexecuted_blocks=1 00:10:25.091 00:10:25.091 ' 00:10:25.091 07:37:49 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.091 07:37:49 -- nvmf/common.sh@7 -- # uname -s 00:10:25.091 07:37:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.091 07:37:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.091 07:37:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.091 07:37:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.091 07:37:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.091 07:37:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.091 07:37:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.091 07:37:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.091 07:37:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.091 07:37:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.091 07:37:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:10:25.091 
07:37:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:10:25.091 07:37:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.091 07:37:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.091 07:37:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.091 07:37:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.091 07:37:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.091 07:37:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.091 07:37:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.091 07:37:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.091 07:37:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.091 07:37:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.091 07:37:49 -- paths/export.sh@5 -- # export PATH 00:10:25.091 07:37:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.091 07:37:49 -- nvmf/common.sh@46 -- # : 0 00:10:25.091 07:37:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:25.091 07:37:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:25.091 07:37:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:25.091 07:37:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.091 07:37:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.091 07:37:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:25.091 07:37:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:25.091 07:37:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:25.091 07:37:49 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.091 07:37:49 -- fips/fips.sh@89 -- # check_openssl_version 00:10:25.091 07:37:49 -- fips/fips.sh@83 -- # local target=3.0.0 00:10:25.091 07:37:49 -- fips/fips.sh@85 -- # openssl version 00:10:25.091 07:37:49 -- fips/fips.sh@85 -- # awk '{print $2}' 00:10:25.091 07:37:49 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:10:25.091 07:37:49 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:10:25.091 07:37:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:25.091 07:37:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:25.091 07:37:49 -- scripts/common.sh@335 -- # IFS=.-: 00:10:25.091 07:37:49 -- scripts/common.sh@335 -- # read -ra ver1 00:10:25.091 07:37:49 -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.091 07:37:49 -- scripts/common.sh@336 -- # read -ra ver2 00:10:25.092 07:37:49 -- scripts/common.sh@337 -- # local 'op=>=' 00:10:25.092 07:37:49 -- scripts/common.sh@339 -- # ver1_l=3 00:10:25.092 07:37:49 -- scripts/common.sh@340 -- # ver2_l=3 00:10:25.092 07:37:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:25.092 07:37:49 -- scripts/common.sh@343 -- # case "$op" in 00:10:25.092 07:37:49 -- scripts/common.sh@347 -- # : 1 00:10:25.092 07:37:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:25.092 07:37:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.092 07:37:49 -- scripts/common.sh@364 -- # decimal 3 00:10:25.092 07:37:49 -- scripts/common.sh@352 -- # local d=3 00:10:25.092 07:37:49 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:10:25.092 07:37:49 -- scripts/common.sh@354 -- # echo 3 00:10:25.092 07:37:49 -- scripts/common.sh@364 -- # ver1[v]=3 00:10:25.092 07:37:49 -- scripts/common.sh@365 -- # decimal 3 00:10:25.092 07:37:49 -- scripts/common.sh@352 -- # local d=3 00:10:25.092 07:37:49 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:10:25.092 07:37:49 -- scripts/common.sh@354 -- # echo 3 00:10:25.092 07:37:49 -- scripts/common.sh@365 -- # ver2[v]=3 00:10:25.092 07:37:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:25.092 07:37:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:25.092 07:37:49 -- scripts/common.sh@363 -- # (( v++ )) 00:10:25.092 07:37:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.092 07:37:49 -- scripts/common.sh@364 -- # decimal 1 00:10:25.092 07:37:49 -- scripts/common.sh@352 -- # local d=1 00:10:25.092 07:37:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.092 07:37:49 -- scripts/common.sh@354 -- # echo 1 00:10:25.092 07:37:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:25.092 07:37:49 -- scripts/common.sh@365 -- # decimal 0 00:10:25.092 07:37:49 -- scripts/common.sh@352 -- # local d=0 00:10:25.092 07:37:49 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:10:25.092 07:37:49 -- scripts/common.sh@354 -- # echo 0 00:10:25.092 07:37:49 -- scripts/common.sh@365 -- # ver2[v]=0 00:10:25.092 07:37:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:25.092 07:37:49 -- scripts/common.sh@366 -- # return 0 00:10:25.092 07:37:49 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:10:25.092 07:37:49 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:10:25.092 07:37:49 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:10:25.092 07:37:49 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:10:25.092 07:37:49 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:10:25.092 07:37:49 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:10:25.092 07:37:49 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:10:25.092 07:37:49 -- fips/fips.sh@113 -- # build_openssl_config 00:10:25.092 07:37:49 -- fips/fips.sh@37 -- # cat 00:10:25.092 07:37:49 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:10:25.092 07:37:49 -- fips/fips.sh@58 -- # cat - 00:10:25.092 07:37:49 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:10:25.092 07:37:49 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:10:25.092 07:37:49 -- fips/fips.sh@116 -- # mapfile -t providers 00:10:25.092 07:37:49 -- fips/fips.sh@116 -- # openssl list -providers 00:10:25.092 07:37:49 -- fips/fips.sh@116 -- # grep name 00:10:25.092 07:37:49 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:10:25.092 07:37:49 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:10:25.092 07:37:49 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:10:25.092 07:37:49 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:10:25.092 07:37:49 -- fips/fips.sh@127 -- # : 00:10:25.092 07:37:49 -- common/autotest_common.sh@650 -- # local es=0 00:10:25.092 07:37:49 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:10:25.092 07:37:49 -- common/autotest_common.sh@638 -- # local arg=openssl 00:10:25.092 07:37:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.092 07:37:49 -- common/autotest_common.sh@642 -- # type -t openssl 00:10:25.092 07:37:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.092 07:37:49 -- common/autotest_common.sh@644 -- # type -P openssl 00:10:25.092 07:37:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.092 07:37:49 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:10:25.092 07:37:49 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:10:25.092 07:37:49 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:10:25.092 Error setting digest 00:10:25.092 40D26D77A37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:10:25.092 40D26D77A37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:10:25.092 07:37:49 -- common/autotest_common.sh@653 -- # es=1 00:10:25.092 07:37:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.092 07:37:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.092 07:37:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.092 07:37:49 -- fips/fips.sh@130 -- # nvmftestinit 00:10:25.092 07:37:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:25.092 07:37:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.092 07:37:49 -- nvmf/common.sh@436 -- # prepare_net_devs 
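Before any TLS traffic is generated, fips.sh confirms that OpenSSL really is operating with a FIPS provider: it inspects the provider list and then expects a non-approved digest to be rejected, which is why the "Error setting digest" lines above are the desired outcome rather than a failure. A rough stand-alone equivalent of that check (the real script additionally generates a dedicated spdk_fips.conf via build_openssl_config and exports OPENSSL_CONF):

  # Both a base provider and a fips provider must be loaded.
  openssl list -providers | grep name

  # MD5 is not FIPS-approved, so it must fail when the FIPS provider is active.
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo 'MD5 unexpectedly succeeded: OpenSSL is not in FIPS mode' >&2
      exit 1
  fi
  echo 'FIPS mode confirmed'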
00:10:25.092 07:37:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:25.092 07:37:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:25.092 07:37:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.092 07:37:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.092 07:37:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.092 07:37:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:25.092 07:37:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:25.092 07:37:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:25.092 07:37:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:25.092 07:37:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:25.092 07:37:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:25.092 07:37:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.092 07:37:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.092 07:37:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.092 07:37:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:25.092 07:37:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.092 07:37:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.092 07:37:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.092 07:37:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.092 07:37:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.092 07:37:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.092 07:37:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.092 07:37:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.092 07:37:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:25.092 07:37:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:25.092 Cannot find device "nvmf_tgt_br" 00:10:25.092 07:37:49 -- nvmf/common.sh@154 -- # true 00:10:25.092 07:37:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.092 Cannot find device "nvmf_tgt_br2" 00:10:25.092 07:37:49 -- nvmf/common.sh@155 -- # true 00:10:25.092 07:37:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:25.092 07:37:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:25.092 Cannot find device "nvmf_tgt_br" 00:10:25.092 07:37:49 -- nvmf/common.sh@157 -- # true 00:10:25.092 07:37:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:25.092 Cannot find device "nvmf_tgt_br2" 00:10:25.092 07:37:49 -- nvmf/common.sh@158 -- # true 00:10:25.092 07:37:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:25.092 07:37:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:25.092 07:37:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.092 07:37:49 -- nvmf/common.sh@161 -- # true 00:10:25.092 07:37:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.092 07:37:49 -- nvmf/common.sh@162 -- # true 00:10:25.092 07:37:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.092 07:37:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.092 07:37:49 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.092 07:37:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.092 07:37:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.092 07:37:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.092 07:37:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.092 07:37:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:25.092 07:37:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.092 07:37:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:25.092 07:37:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:25.092 07:37:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:25.092 07:37:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:25.092 07:37:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.092 07:37:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.092 07:37:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.092 07:37:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:25.092 07:37:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:25.092 07:37:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.092 07:37:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.092 07:37:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.092 07:37:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.092 07:37:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.092 07:37:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:25.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:10:25.092 00:10:25.092 --- 10.0.0.2 ping statistics --- 00:10:25.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.092 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:25.092 07:37:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:25.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:10:25.093 00:10:25.093 --- 10.0.0.3 ping statistics --- 00:10:25.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.093 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:25.093 07:37:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:25.093 00:10:25.093 --- 10.0.0.1 ping statistics --- 00:10:25.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.093 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:25.093 07:37:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.093 07:37:49 -- nvmf/common.sh@421 -- # return 0 00:10:25.093 07:37:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:25.093 07:37:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.093 07:37:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:25.093 07:37:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:25.093 07:37:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.093 07:37:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:25.093 07:37:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:25.093 07:37:49 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:10:25.093 07:37:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:25.093 07:37:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.093 07:37:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.093 07:37:49 -- nvmf/common.sh@469 -- # nvmfpid=65718 00:10:25.093 07:37:49 -- nvmf/common.sh@470 -- # waitforlisten 65718 00:10:25.093 07:37:49 -- common/autotest_common.sh@829 -- # '[' -z 65718 ']' 00:10:25.093 07:37:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.093 07:37:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:25.093 07:37:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.093 07:37:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.093 07:37:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.093 07:37:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.093 [2024-12-02 07:37:49.958428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:25.093 [2024-12-02 07:37:49.958518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.093 [2024-12-02 07:37:50.094595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.093 [2024-12-02 07:37:50.162477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:25.093 [2024-12-02 07:37:50.162635] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.093 [2024-12-02 07:37:50.162651] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.093 [2024-12-02 07:37:50.162661] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:25.093 [2024-12-02 07:37:50.162697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.353 07:37:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.353 07:37:50 -- common/autotest_common.sh@862 -- # return 0 00:10:25.353 07:37:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:25.353 07:37:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.353 07:37:50 -- common/autotest_common.sh@10 -- # set +x 00:10:25.353 07:37:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.353 07:37:50 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:10:25.353 07:37:50 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:10:25.353 07:37:50 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:25.353 07:37:50 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:10:25.353 07:37:50 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:25.353 07:37:50 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:25.353 07:37:50 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:25.353 07:37:50 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.612 [2024-12-02 07:37:51.149697] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.612 [2024-12-02 07:37:51.165648] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:25.612 [2024-12-02 07:37:51.165800] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.612 malloc0 00:10:25.612 07:37:51 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:25.612 07:37:51 -- fips/fips.sh@147 -- # bdevperf_pid=65763 00:10:25.612 07:37:51 -- fips/fips.sh@148 -- # waitforlisten 65763 /var/tmp/bdevperf.sock 00:10:25.612 07:37:51 -- common/autotest_common.sh@829 -- # '[' -z 65763 ']' 00:10:25.612 07:37:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:25.612 07:37:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:25.612 07:37:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:25.612 07:37:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.612 07:37:51 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:25.612 07:37:51 -- common/autotest_common.sh@10 -- # set +x 00:10:25.872 [2024-12-02 07:37:51.288777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:25.872 [2024-12-02 07:37:51.288864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65763 ] 00:10:25.872 [2024-12-02 07:37:51.428931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.131 [2024-12-02 07:37:51.495549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.700 07:37:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.700 07:37:52 -- common/autotest_common.sh@862 -- # return 0 00:10:26.700 07:37:52 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:26.960 [2024-12-02 07:37:52.371031] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:26.960 TLSTESTn1 00:10:26.960 07:37:52 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.960 Running I/O for 10 seconds... 00:10:37.013 00:10:37.013 Latency(us) 00:10:37.013 [2024-12-02T07:38:02.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.013 [2024-12-02T07:38:02.637Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:37.013 Verification LBA range: start 0x0 length 0x2000 00:10:37.013 TLSTESTn1 : 10.02 6012.92 23.49 0.00 0.00 21252.00 4468.36 24069.59 00:10:37.013 [2024-12-02T07:38:02.637Z] =================================================================================================================== 00:10:37.013 [2024-12-02T07:38:02.637Z] Total : 6012.92 23.49 0.00 0.00 21252.00 4468.36 24069.59 00:10:37.013 0 00:10:37.013 07:38:02 -- fips/fips.sh@1 -- # cleanup 00:10:37.013 07:38:02 -- fips/fips.sh@15 -- # process_shm --id 0 00:10:37.013 07:38:02 -- common/autotest_common.sh@806 -- # type=--id 00:10:37.013 07:38:02 -- common/autotest_common.sh@807 -- # id=0 00:10:37.013 07:38:02 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:37.013 07:38:02 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:37.013 07:38:02 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:37.014 07:38:02 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:37.014 07:38:02 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:37.014 07:38:02 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:37.014 nvmf_trace.0 00:10:37.273 07:38:02 -- common/autotest_common.sh@821 -- # return 0 00:10:37.273 07:38:02 -- fips/fips.sh@16 -- # killprocess 65763 00:10:37.273 07:38:02 -- common/autotest_common.sh@936 -- # '[' -z 65763 ']' 00:10:37.273 07:38:02 -- common/autotest_common.sh@940 -- # kill -0 65763 00:10:37.273 07:38:02 -- common/autotest_common.sh@941 -- # uname 00:10:37.273 07:38:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.273 07:38:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65763 00:10:37.273 07:38:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:37.273 07:38:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:37.273 
killing process with pid 65763 00:10:37.273 07:38:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65763' 00:10:37.273 07:38:02 -- common/autotest_common.sh@955 -- # kill 65763 00:10:37.273 Received shutdown signal, test time was about 10.000000 seconds 00:10:37.273 00:10:37.273 Latency(us) 00:10:37.273 [2024-12-02T07:38:02.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.273 [2024-12-02T07:38:02.897Z] =================================================================================================================== 00:10:37.273 [2024-12-02T07:38:02.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:37.273 07:38:02 -- common/autotest_common.sh@960 -- # wait 65763 00:10:37.273 07:38:02 -- fips/fips.sh@17 -- # nvmftestfini 00:10:37.273 07:38:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:37.273 07:38:02 -- nvmf/common.sh@116 -- # sync 00:10:37.533 07:38:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:37.533 07:38:02 -- nvmf/common.sh@119 -- # set +e 00:10:37.533 07:38:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:37.533 07:38:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:37.533 rmmod nvme_tcp 00:10:37.533 rmmod nvme_fabrics 00:10:37.533 rmmod nvme_keyring 00:10:37.533 07:38:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:37.533 07:38:02 -- nvmf/common.sh@123 -- # set -e 00:10:37.533 07:38:02 -- nvmf/common.sh@124 -- # return 0 00:10:37.533 07:38:02 -- nvmf/common.sh@477 -- # '[' -n 65718 ']' 00:10:37.533 07:38:02 -- nvmf/common.sh@478 -- # killprocess 65718 00:10:37.533 07:38:02 -- common/autotest_common.sh@936 -- # '[' -z 65718 ']' 00:10:37.533 07:38:02 -- common/autotest_common.sh@940 -- # kill -0 65718 00:10:37.533 07:38:02 -- common/autotest_common.sh@941 -- # uname 00:10:37.533 07:38:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.533 07:38:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65718 00:10:37.533 07:38:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:37.533 07:38:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:37.533 07:38:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65718' 00:10:37.533 killing process with pid 65718 00:10:37.533 07:38:03 -- common/autotest_common.sh@955 -- # kill 65718 00:10:37.533 07:38:03 -- common/autotest_common.sh@960 -- # wait 65718 00:10:37.792 07:38:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:37.792 07:38:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:37.792 07:38:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:37.792 07:38:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.792 07:38:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:37.792 07:38:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.792 07:38:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.792 07:38:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.792 07:38:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:37.792 07:38:03 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:37.792 00:10:37.792 real 0m14.046s 00:10:37.792 user 0m19.061s 00:10:37.792 sys 0m5.588s 00:10:37.792 07:38:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:37.792 07:38:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.792 ************************************ 00:10:37.792 END TEST nvmf_fips 
00:10:37.792 ************************************ 00:10:37.792 07:38:03 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:10:37.792 07:38:03 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:10:37.792 07:38:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:37.792 07:38:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.792 07:38:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.792 ************************************ 00:10:37.792 START TEST nvmf_fuzz 00:10:37.792 ************************************ 00:10:37.792 07:38:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:10:37.792 * Looking for test storage... 00:10:37.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.792 07:38:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:37.792 07:38:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:37.792 07:38:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:38.051 07:38:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:38.051 07:38:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:38.051 07:38:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:38.051 07:38:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:38.051 07:38:03 -- scripts/common.sh@335 -- # IFS=.-: 00:10:38.051 07:38:03 -- scripts/common.sh@335 -- # read -ra ver1 00:10:38.051 07:38:03 -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.051 07:38:03 -- scripts/common.sh@336 -- # read -ra ver2 00:10:38.051 07:38:03 -- scripts/common.sh@337 -- # local 'op=<' 00:10:38.051 07:38:03 -- scripts/common.sh@339 -- # ver1_l=2 00:10:38.051 07:38:03 -- scripts/common.sh@340 -- # ver2_l=1 00:10:38.051 07:38:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:38.051 07:38:03 -- scripts/common.sh@343 -- # case "$op" in 00:10:38.051 07:38:03 -- scripts/common.sh@344 -- # : 1 00:10:38.051 07:38:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:38.051 07:38:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.051 07:38:03 -- scripts/common.sh@364 -- # decimal 1 00:10:38.051 07:38:03 -- scripts/common.sh@352 -- # local d=1 00:10:38.051 07:38:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.051 07:38:03 -- scripts/common.sh@354 -- # echo 1 00:10:38.051 07:38:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:38.051 07:38:03 -- scripts/common.sh@365 -- # decimal 2 00:10:38.051 07:38:03 -- scripts/common.sh@352 -- # local d=2 00:10:38.051 07:38:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.051 07:38:03 -- scripts/common.sh@354 -- # echo 2 00:10:38.051 07:38:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:38.051 07:38:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:38.051 07:38:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:38.051 07:38:03 -- scripts/common.sh@367 -- # return 0 00:10:38.051 07:38:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.051 07:38:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.051 --rc genhtml_branch_coverage=1 00:10:38.051 --rc genhtml_function_coverage=1 00:10:38.051 --rc genhtml_legend=1 00:10:38.051 --rc geninfo_all_blocks=1 00:10:38.052 --rc geninfo_unexecuted_blocks=1 00:10:38.052 00:10:38.052 ' 00:10:38.052 07:38:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.052 --rc genhtml_branch_coverage=1 00:10:38.052 --rc genhtml_function_coverage=1 00:10:38.052 --rc genhtml_legend=1 00:10:38.052 --rc geninfo_all_blocks=1 00:10:38.052 --rc geninfo_unexecuted_blocks=1 00:10:38.052 00:10:38.052 ' 00:10:38.052 07:38:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.052 --rc genhtml_branch_coverage=1 00:10:38.052 --rc genhtml_function_coverage=1 00:10:38.052 --rc genhtml_legend=1 00:10:38.052 --rc geninfo_all_blocks=1 00:10:38.052 --rc geninfo_unexecuted_blocks=1 00:10:38.052 00:10:38.052 ' 00:10:38.052 07:38:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:38.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.052 --rc genhtml_branch_coverage=1 00:10:38.052 --rc genhtml_function_coverage=1 00:10:38.052 --rc genhtml_legend=1 00:10:38.052 --rc geninfo_all_blocks=1 00:10:38.052 --rc geninfo_unexecuted_blocks=1 00:10:38.052 00:10:38.052 ' 00:10:38.052 07:38:03 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.052 07:38:03 -- nvmf/common.sh@7 -- # uname -s 00:10:38.052 07:38:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.052 07:38:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.052 07:38:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.052 07:38:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.052 07:38:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.052 07:38:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.052 07:38:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.052 07:38:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.052 07:38:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.052 07:38:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.052 07:38:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 
00:10:38.052 07:38:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:10:38.052 07:38:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.052 07:38:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.052 07:38:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.052 07:38:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.052 07:38:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.052 07:38:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.052 07:38:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.052 07:38:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.052 07:38:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.052 07:38:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.052 07:38:03 -- paths/export.sh@5 -- # export PATH 00:10:38.052 07:38:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.052 07:38:03 -- nvmf/common.sh@46 -- # : 0 00:10:38.052 07:38:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:38.052 07:38:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:38.052 07:38:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:38.052 07:38:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.052 07:38:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.052 07:38:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:38.052 07:38:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:38.052 07:38:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:38.052 07:38:03 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:10:38.052 07:38:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:38.052 07:38:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.052 07:38:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:38.052 07:38:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:38.052 07:38:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:38.052 07:38:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.052 07:38:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.052 07:38:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.052 07:38:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:38.052 07:38:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:38.052 07:38:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:38.052 07:38:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:38.052 07:38:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:38.052 07:38:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:38.052 07:38:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.052 07:38:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.052 07:38:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:38.052 07:38:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:38.052 07:38:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:38.052 07:38:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:38.052 07:38:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:38.052 07:38:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.052 07:38:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:38.052 07:38:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:38.052 07:38:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:38.052 07:38:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:38.052 07:38:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:38.052 07:38:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:38.052 Cannot find device "nvmf_tgt_br" 00:10:38.052 07:38:03 -- nvmf/common.sh@154 -- # true 00:10:38.052 07:38:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.052 Cannot find device "nvmf_tgt_br2" 00:10:38.052 07:38:03 -- nvmf/common.sh@155 -- # true 00:10:38.052 07:38:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:38.052 07:38:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:38.052 Cannot find device "nvmf_tgt_br" 00:10:38.052 07:38:03 -- nvmf/common.sh@157 -- # true 00:10:38.052 07:38:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:38.052 Cannot find device "nvmf_tgt_br2" 00:10:38.052 07:38:03 -- nvmf/common.sh@158 -- # true 00:10:38.052 07:38:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:38.052 07:38:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:38.052 07:38:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.052 07:38:03 -- nvmf/common.sh@161 -- # true 00:10:38.052 07:38:03 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.052 07:38:03 -- nvmf/common.sh@162 -- # true 00:10:38.052 07:38:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.052 07:38:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:38.052 07:38:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:38.052 07:38:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:38.052 07:38:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:38.052 07:38:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:38.052 07:38:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:38.052 07:38:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:38.313 07:38:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:38.313 07:38:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:38.313 07:38:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:38.313 07:38:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:38.313 07:38:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:38.313 07:38:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:38.313 07:38:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:38.313 07:38:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:38.313 07:38:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:38.313 07:38:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:38.313 07:38:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:38.313 07:38:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:38.313 07:38:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:38.313 07:38:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:38.313 07:38:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.313 07:38:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:38.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:38.313 00:10:38.313 --- 10.0.0.2 ping statistics --- 00:10:38.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.313 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:38.313 07:38:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:38.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:38.313 00:10:38.313 --- 10.0.0.3 ping statistics --- 00:10:38.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.313 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:38.313 07:38:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:38.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:38.313 00:10:38.313 --- 10.0.0.1 ping statistics --- 00:10:38.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.313 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:38.313 07:38:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.313 07:38:03 -- nvmf/common.sh@421 -- # return 0 00:10:38.313 07:38:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:38.313 07:38:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.313 07:38:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:38.313 07:38:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:38.313 07:38:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.313 07:38:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:38.313 07:38:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:38.313 07:38:03 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66095 00:10:38.313 07:38:03 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:38.313 07:38:03 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66095 00:10:38.313 07:38:03 -- common/autotest_common.sh@829 -- # '[' -z 66095 ']' 00:10:38.313 07:38:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.313 07:38:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.313 07:38:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.313 07:38:03 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:38.313 07:38:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.313 07:38:03 -- common/autotest_common.sh@10 -- # set +x 00:10:39.689 07:38:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.689 07:38:04 -- common/autotest_common.sh@862 -- # return 0 00:10:39.689 07:38:04 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.689 07:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.689 07:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:39.689 07:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.689 07:38:04 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:10:39.689 07:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.689 07:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:39.689 Malloc0 00:10:39.689 07:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.689 07:38:04 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.689 07:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.689 07:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:39.689 07:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.689 07:38:04 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.689 07:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.689 07:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:39.689 07:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.689 07:38:04 -- 
target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.689 07:38:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.689 07:38:04 -- common/autotest_common.sh@10 -- # set +x 00:10:39.689 07:38:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.689 07:38:04 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:10:39.689 07:38:04 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:10:39.689 Shutting down the fuzz application 00:10:39.689 07:38:05 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:10:40.256 Shutting down the fuzz application 00:10:40.256 07:38:05 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.256 07:38:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.256 07:38:05 -- common/autotest_common.sh@10 -- # set +x 00:10:40.256 07:38:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.256 07:38:05 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:40.256 07:38:05 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:10:40.256 07:38:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:40.256 07:38:05 -- nvmf/common.sh@116 -- # sync 00:10:40.256 07:38:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:40.256 07:38:05 -- nvmf/common.sh@119 -- # set +e 00:10:40.256 07:38:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:40.256 07:38:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:40.256 rmmod nvme_tcp 00:10:40.256 rmmod nvme_fabrics 00:10:40.256 rmmod nvme_keyring 00:10:40.256 07:38:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:40.256 07:38:05 -- nvmf/common.sh@123 -- # set -e 00:10:40.256 07:38:05 -- nvmf/common.sh@124 -- # return 0 00:10:40.256 07:38:05 -- nvmf/common.sh@477 -- # '[' -n 66095 ']' 00:10:40.256 07:38:05 -- nvmf/common.sh@478 -- # killprocess 66095 00:10:40.256 07:38:05 -- common/autotest_common.sh@936 -- # '[' -z 66095 ']' 00:10:40.256 07:38:05 -- common/autotest_common.sh@940 -- # kill -0 66095 00:10:40.256 07:38:05 -- common/autotest_common.sh@941 -- # uname 00:10:40.256 07:38:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:40.256 07:38:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66095 00:10:40.256 07:38:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:40.256 07:38:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:40.256 killing process with pid 66095 00:10:40.256 07:38:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66095' 00:10:40.256 07:38:05 -- common/autotest_common.sh@955 -- # kill 66095 00:10:40.256 07:38:05 -- common/autotest_common.sh@960 -- # wait 66095 00:10:40.514 07:38:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:40.514 07:38:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:40.514 07:38:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:40.514 07:38:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.514 07:38:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:40.514 07:38:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.514 07:38:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.514 07:38:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.514 07:38:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:40.514 07:38:05 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:10:40.514 00:10:40.514 real 0m2.722s 00:10:40.514 user 0m2.988s 00:10:40.514 sys 0m0.534s 00:10:40.514 07:38:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:40.514 07:38:05 -- common/autotest_common.sh@10 -- # set +x 00:10:40.514 ************************************ 00:10:40.514 END TEST nvmf_fuzz 00:10:40.514 ************************************ 00:10:40.514 07:38:06 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:10:40.514 07:38:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:40.514 07:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.514 07:38:06 -- common/autotest_common.sh@10 -- # set +x 00:10:40.514 ************************************ 00:10:40.514 START TEST nvmf_multiconnection 00:10:40.514 ************************************ 00:10:40.515 07:38:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:10:40.515 * Looking for test storage... 00:10:40.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.515 07:38:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:40.515 07:38:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:40.515 07:38:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:40.774 07:38:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:40.774 07:38:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:40.774 07:38:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:40.774 07:38:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:40.774 07:38:06 -- scripts/common.sh@335 -- # IFS=.-: 00:10:40.774 07:38:06 -- scripts/common.sh@335 -- # read -ra ver1 00:10:40.774 07:38:06 -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.774 07:38:06 -- scripts/common.sh@336 -- # read -ra ver2 00:10:40.774 07:38:06 -- scripts/common.sh@337 -- # local 'op=<' 00:10:40.774 07:38:06 -- scripts/common.sh@339 -- # ver1_l=2 00:10:40.774 07:38:06 -- scripts/common.sh@340 -- # ver2_l=1 00:10:40.774 07:38:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:40.774 07:38:06 -- scripts/common.sh@343 -- # case "$op" in 00:10:40.774 07:38:06 -- scripts/common.sh@344 -- # : 1 00:10:40.774 07:38:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:40.774 07:38:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.774 07:38:06 -- scripts/common.sh@364 -- # decimal 1 00:10:40.774 07:38:06 -- scripts/common.sh@352 -- # local d=1 00:10:40.774 07:38:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.774 07:38:06 -- scripts/common.sh@354 -- # echo 1 00:10:40.774 07:38:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:40.774 07:38:06 -- scripts/common.sh@365 -- # decimal 2 00:10:40.774 07:38:06 -- scripts/common.sh@352 -- # local d=2 00:10:40.774 07:38:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.774 07:38:06 -- scripts/common.sh@354 -- # echo 2 00:10:40.774 07:38:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:40.774 07:38:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:40.774 07:38:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:40.774 07:38:06 -- scripts/common.sh@367 -- # return 0 00:10:40.774 07:38:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.774 07:38:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:40.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.774 --rc genhtml_branch_coverage=1 00:10:40.774 --rc genhtml_function_coverage=1 00:10:40.774 --rc genhtml_legend=1 00:10:40.774 --rc geninfo_all_blocks=1 00:10:40.774 --rc geninfo_unexecuted_blocks=1 00:10:40.774 00:10:40.774 ' 00:10:40.774 07:38:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:40.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.774 --rc genhtml_branch_coverage=1 00:10:40.774 --rc genhtml_function_coverage=1 00:10:40.774 --rc genhtml_legend=1 00:10:40.774 --rc geninfo_all_blocks=1 00:10:40.774 --rc geninfo_unexecuted_blocks=1 00:10:40.774 00:10:40.774 ' 00:10:40.774 07:38:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:40.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.774 --rc genhtml_branch_coverage=1 00:10:40.774 --rc genhtml_function_coverage=1 00:10:40.774 --rc genhtml_legend=1 00:10:40.774 --rc geninfo_all_blocks=1 00:10:40.774 --rc geninfo_unexecuted_blocks=1 00:10:40.774 00:10:40.774 ' 00:10:40.774 07:38:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:40.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.774 --rc genhtml_branch_coverage=1 00:10:40.774 --rc genhtml_function_coverage=1 00:10:40.774 --rc genhtml_legend=1 00:10:40.774 --rc geninfo_all_blocks=1 00:10:40.774 --rc geninfo_unexecuted_blocks=1 00:10:40.774 00:10:40.774 ' 00:10:40.774 07:38:06 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.774 07:38:06 -- nvmf/common.sh@7 -- # uname -s 00:10:40.774 07:38:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.774 07:38:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.774 07:38:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.774 07:38:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.774 07:38:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.774 07:38:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.774 07:38:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.774 07:38:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.774 07:38:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.774 07:38:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.774 07:38:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 
00:10:40.774 07:38:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:10:40.774 07:38:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.774 07:38:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.774 07:38:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.774 07:38:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.774 07:38:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.774 07:38:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.774 07:38:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.774 07:38:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.774 07:38:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.774 07:38:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.774 07:38:06 -- paths/export.sh@5 -- # export PATH 00:10:40.774 07:38:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.774 07:38:06 -- nvmf/common.sh@46 -- # : 0 00:10:40.774 07:38:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:40.774 07:38:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:40.774 07:38:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:40.774 07:38:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.774 07:38:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.774 07:38:06 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:40.774 07:38:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:40.774 07:38:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:40.774 07:38:06 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.774 07:38:06 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.774 07:38:06 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:10:40.774 07:38:06 -- target/multiconnection.sh@16 -- # nvmftestinit 00:10:40.774 07:38:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:40.774 07:38:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.774 07:38:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:40.774 07:38:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:40.774 07:38:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:40.774 07:38:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.774 07:38:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.774 07:38:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.774 07:38:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:40.774 07:38:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:40.774 07:38:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:40.774 07:38:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:40.774 07:38:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:40.774 07:38:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:40.774 07:38:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.774 07:38:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.774 07:38:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.774 07:38:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:40.775 07:38:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.775 07:38:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.775 07:38:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.775 07:38:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.775 07:38:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.775 07:38:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.775 07:38:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.775 07:38:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.775 07:38:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:40.775 07:38:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:40.775 Cannot find device "nvmf_tgt_br" 00:10:40.775 07:38:06 -- nvmf/common.sh@154 -- # true 00:10:40.775 07:38:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.775 Cannot find device "nvmf_tgt_br2" 00:10:40.775 07:38:06 -- nvmf/common.sh@155 -- # true 00:10:40.775 07:38:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:40.775 07:38:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:40.775 Cannot find device "nvmf_tgt_br" 00:10:40.775 07:38:06 -- nvmf/common.sh@157 -- # true 00:10:40.775 07:38:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:40.775 Cannot find device "nvmf_tgt_br2" 00:10:40.775 07:38:06 -- nvmf/common.sh@158 -- # true 00:10:40.775 07:38:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:40.775 07:38:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:40.775 07:38:06 -- nvmf/common.sh@161 
-- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.775 07:38:06 -- nvmf/common.sh@161 -- # true 00:10:40.775 07:38:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.775 07:38:06 -- nvmf/common.sh@162 -- # true 00:10:40.775 07:38:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.775 07:38:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.775 07:38:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.775 07:38:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.775 07:38:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.775 07:38:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.034 07:38:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.034 07:38:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:41.034 07:38:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:41.034 07:38:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:41.034 07:38:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:41.034 07:38:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:41.034 07:38:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:41.034 07:38:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.034 07:38:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.034 07:38:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.034 07:38:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:41.034 07:38:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:41.034 07:38:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.034 07:38:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.034 07:38:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.034 07:38:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.034 07:38:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.034 07:38:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:41.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:41.034 00:10:41.034 --- 10.0.0.2 ping statistics --- 00:10:41.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.034 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:41.034 07:38:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:41.034 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:41.034 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:41.034 00:10:41.034 --- 10.0.0.3 ping statistics --- 00:10:41.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.034 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:41.034 07:38:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:41.034 00:10:41.034 --- 10.0.0.1 ping statistics --- 00:10:41.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.034 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:41.034 07:38:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.034 07:38:06 -- nvmf/common.sh@421 -- # return 0 00:10:41.034 07:38:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:41.034 07:38:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.034 07:38:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:41.034 07:38:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:41.034 07:38:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.034 07:38:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:41.034 07:38:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:41.034 07:38:06 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:10:41.034 07:38:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:41.034 07:38:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.034 07:38:06 -- common/autotest_common.sh@10 -- # set +x 00:10:41.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.034 07:38:06 -- nvmf/common.sh@469 -- # nvmfpid=66301 00:10:41.034 07:38:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.034 07:38:06 -- nvmf/common.sh@470 -- # waitforlisten 66301 00:10:41.034 07:38:06 -- common/autotest_common.sh@829 -- # '[' -z 66301 ']' 00:10:41.034 07:38:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.034 07:38:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.034 07:38:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.034 07:38:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.034 07:38:06 -- common/autotest_common.sh@10 -- # set +x 00:10:41.034 [2024-12-02 07:38:06.593391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:41.034 [2024-12-02 07:38:06.593472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.293 [2024-12-02 07:38:06.717920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.293 [2024-12-02 07:38:06.771074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:41.293 [2024-12-02 07:38:06.771686] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.293 [2024-12-02 07:38:06.771943] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:41.293 [2024-12-02 07:38:06.772144] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.293 [2024-12-02 07:38:06.772506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.293 [2024-12-02 07:38:06.772594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.293 [2024-12-02 07:38:06.772721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.293 [2024-12-02 07:38:06.772724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.230 07:38:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.230 07:38:07 -- common/autotest_common.sh@862 -- # return 0 00:10:42.230 07:38:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:42.230 07:38:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.230 07:38:07 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 [2024-12-02 07:38:07.617827] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # seq 1 11 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.230 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 Malloc1 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 [2024-12-02 07:38:07.676801] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.230 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 Malloc2 00:10:42.230 07:38:07 -- 
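The target side of the run condenses to the sketch below: nvmf_tgt is launched inside the namespace, and once the RPC socket /var/tmp/spdk.sock is listening, the TCP transport and eleven single-namespace subsystems are provisioned. rpc_cmd stands for the suite's RPC helper (in a stock SPDK checkout the equivalent calls go through scripts/rpc.py), and the eleven-way loop is simply the pattern the following trace repeats for Malloc1..Malloc11 and cnode1..cnode11; this is a condensed restatement of the trace, not the multiconnection.sh source:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten in the trace: poll until the target answers on /var/tmp/spdk.sock
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"       # 64 MB malloc bdev, 512-byte blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The -m 0xF core mask matches the four reactors the log reports starting on cores 0 through 3, and the serial number SPDK$i given to each subsystem is what the initiator later keys on when waiting for devices.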
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.230 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 Malloc3 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.230 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 Malloc4 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.230 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 Malloc5 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.230 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.230 Malloc6 00:10:42.230 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.230 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:10:42.230 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.230 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.490 07:38:07 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 Malloc7 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.490 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 Malloc8 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.490 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 Malloc9 00:10:42.490 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.490 07:38:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:10:42.490 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.490 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.490 07:38:07 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:10:42.491 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:10:42.491 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.491 07:38:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:10:42.491 07:38:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 Malloc10 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.491 07:38:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 Malloc11 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:10:42.491 07:38:08 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:42.491 07:38:08 -- common/autotest_common.sh@10 -- # set +x 00:10:42.491 07:38:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.491 07:38:08 -- target/multiconnection.sh@28 -- # seq 1 11 00:10:42.491 07:38:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:42.491 07:38:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.750 07:38:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:10:42.750 07:38:08 -- common/autotest_common.sh@1187 -- # local i=0 00:10:42.750 07:38:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.750 07:38:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:42.750 07:38:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:44.654 07:38:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:44.654 07:38:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:44.654 07:38:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:10:44.654 07:38:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:44.654 07:38:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.654 07:38:10 -- common/autotest_common.sh@1197 -- # return 0 00:10:44.654 07:38:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:44.654 07:38:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:10:44.912 07:38:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:10:44.912 07:38:10 -- common/autotest_common.sh@1187 -- # local i=0 00:10:44.912 07:38:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.912 07:38:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:44.912 07:38:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:46.815 07:38:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:46.815 07:38:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:46.815 07:38:12 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:10:46.815 07:38:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:46.815 07:38:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.815 07:38:12 -- common/autotest_common.sh@1197 -- # return 0 00:10:46.815 07:38:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:46.815 07:38:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:10:47.074 07:38:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:10:47.074 07:38:12 -- common/autotest_common.sh@1187 -- # local i=0 00:10:47.074 07:38:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.074 07:38:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:47.074 07:38:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:48.977 07:38:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:48.977 07:38:14 -- common/autotest_common.sh@1196 -- # 
lsblk -l -o NAME,SERIAL 00:10:48.977 07:38:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:10:48.977 07:38:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:48.977 07:38:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.977 07:38:14 -- common/autotest_common.sh@1197 -- # return 0 00:10:48.977 07:38:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:48.977 07:38:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:10:49.236 07:38:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:10:49.236 07:38:14 -- common/autotest_common.sh@1187 -- # local i=0 00:10:49.236 07:38:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.236 07:38:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:49.236 07:38:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:51.138 07:38:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:51.138 07:38:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:51.138 07:38:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:10:51.138 07:38:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:51.138 07:38:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.138 07:38:16 -- common/autotest_common.sh@1197 -- # return 0 00:10:51.138 07:38:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:51.138 07:38:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:10:51.397 07:38:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:10:51.397 07:38:16 -- common/autotest_common.sh@1187 -- # local i=0 00:10:51.397 07:38:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.397 07:38:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:51.397 07:38:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:53.300 07:38:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:53.300 07:38:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:53.300 07:38:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:10:53.300 07:38:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:53.300 07:38:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.300 07:38:18 -- common/autotest_common.sh@1197 -- # return 0 00:10:53.300 07:38:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:53.300 07:38:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:10:53.558 07:38:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:10:53.558 07:38:18 -- common/autotest_common.sh@1187 -- # local i=0 00:10:53.558 07:38:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.558 07:38:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:53.558 07:38:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:55.463 
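On the initiator side, the pattern this stretch of the trace iterates over is a standard nvme-cli connect followed by the suite's waitforserial poll. Condensed, with a simpler polling loop standing in for the helper's bounded retry counter, it looks like the following; the host NQN/UUID is copied from the log:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a
    for i in $(seq 1 11); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "nqn.2016-06.io.spdk:cnode$i" \
            --hostnqn="$HOSTNQN" --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a
        # waitforserial SPDK$i: re-check every 2 s (up to ~15 tries in the helper)
        # until lsblk lists a namespace whose serial is SPDK$i
        until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done
    done

Each connected subsystem then surfaces as one of the /dev/nvme*n1 block devices that the fio job file further down uses as its per-job filename.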
07:38:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:55.463 07:38:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:55.463 07:38:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:10:55.463 07:38:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:55.463 07:38:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.463 07:38:21 -- common/autotest_common.sh@1197 -- # return 0 00:10:55.463 07:38:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:55.463 07:38:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:10:55.722 07:38:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:10:55.722 07:38:21 -- common/autotest_common.sh@1187 -- # local i=0 00:10:55.722 07:38:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.722 07:38:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:55.722 07:38:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:57.627 07:38:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:57.627 07:38:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:57.627 07:38:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:10:57.627 07:38:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:57.627 07:38:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.627 07:38:23 -- common/autotest_common.sh@1197 -- # return 0 00:10:57.627 07:38:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:57.627 07:38:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:10:57.886 07:38:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:10:57.886 07:38:23 -- common/autotest_common.sh@1187 -- # local i=0 00:10:57.886 07:38:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.886 07:38:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:57.886 07:38:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:59.789 07:38:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:59.789 07:38:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:59.789 07:38:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:10:59.789 07:38:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:59.789 07:38:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.789 07:38:25 -- common/autotest_common.sh@1197 -- # return 0 00:10:59.789 07:38:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:10:59.789 07:38:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:11:00.047 07:38:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:11:00.047 07:38:25 -- common/autotest_common.sh@1187 -- # local i=0 00:11:00.047 07:38:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.047 07:38:25 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:00.047 07:38:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:01.950 07:38:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:01.950 07:38:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:01.950 07:38:27 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:11:01.950 07:38:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:01.950 07:38:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.950 07:38:27 -- common/autotest_common.sh@1197 -- # return 0 00:11:01.950 07:38:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:01.950 07:38:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:11:02.208 07:38:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:11:02.208 07:38:27 -- common/autotest_common.sh@1187 -- # local i=0 00:11:02.208 07:38:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.208 07:38:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:02.208 07:38:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:04.112 07:38:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:04.112 07:38:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:04.112 07:38:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:11:04.112 07:38:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:04.112 07:38:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.112 07:38:29 -- common/autotest_common.sh@1197 -- # return 0 00:11:04.112 07:38:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:04.112 07:38:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:11:04.372 07:38:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:11:04.372 07:38:29 -- common/autotest_common.sh@1187 -- # local i=0 00:11:04.372 07:38:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.372 07:38:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:04.372 07:38:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:06.274 07:38:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:06.274 07:38:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:06.274 07:38:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:11:06.274 07:38:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:06.274 07:38:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.274 07:38:31 -- common/autotest_common.sh@1197 -- # return 0 00:11:06.274 07:38:31 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:11:06.274 [global] 00:11:06.274 thread=1 00:11:06.274 invalidate=1 00:11:06.274 rw=read 00:11:06.274 time_based=1 00:11:06.274 runtime=10 00:11:06.274 ioengine=libaio 00:11:06.274 direct=1 00:11:06.274 bs=262144 00:11:06.274 iodepth=64 00:11:06.274 norandommap=1 00:11:06.274 numjobs=1 00:11:06.274 00:11:06.274 [job0] 00:11:06.274 
filename=/dev/nvme0n1 00:11:06.274 [job1] 00:11:06.274 filename=/dev/nvme10n1 00:11:06.274 [job2] 00:11:06.274 filename=/dev/nvme1n1 00:11:06.274 [job3] 00:11:06.274 filename=/dev/nvme2n1 00:11:06.274 [job4] 00:11:06.274 filename=/dev/nvme3n1 00:11:06.533 [job5] 00:11:06.533 filename=/dev/nvme4n1 00:11:06.533 [job6] 00:11:06.533 filename=/dev/nvme5n1 00:11:06.533 [job7] 00:11:06.533 filename=/dev/nvme6n1 00:11:06.533 [job8] 00:11:06.533 filename=/dev/nvme7n1 00:11:06.533 [job9] 00:11:06.533 filename=/dev/nvme8n1 00:11:06.533 [job10] 00:11:06.533 filename=/dev/nvme9n1 00:11:06.533 Could not set queue depth (nvme0n1) 00:11:06.533 Could not set queue depth (nvme10n1) 00:11:06.533 Could not set queue depth (nvme1n1) 00:11:06.533 Could not set queue depth (nvme2n1) 00:11:06.533 Could not set queue depth (nvme3n1) 00:11:06.533 Could not set queue depth (nvme4n1) 00:11:06.533 Could not set queue depth (nvme5n1) 00:11:06.533 Could not set queue depth (nvme6n1) 00:11:06.533 Could not set queue depth (nvme7n1) 00:11:06.533 Could not set queue depth (nvme8n1) 00:11:06.533 Could not set queue depth (nvme9n1) 00:11:06.792 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:06.792 fio-3.35 00:11:06.792 Starting 11 threads 00:11:19.087 00:11:19.087 job0: (groupid=0, jobs=1): err= 0: pid=66764: Mon Dec 2 07:38:42 2024 00:11:19.087 read: IOPS=694, BW=174MiB/s (182MB/s)(1746MiB/10054msec) 00:11:19.087 slat (usec): min=20, max=38862, avg=1419.37, stdev=3242.24 00:11:19.087 clat (msec): min=37, max=147, avg=90.60, stdev=11.00 00:11:19.087 lat (msec): min=38, max=149, avg=92.02, stdev=11.09 00:11:19.087 clat percentiles (msec): 00:11:19.087 | 1.00th=[ 70], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 83], 00:11:19.087 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:11:19.087 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 103], 95.00th=[ 113], 00:11:19.087 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 148], 00:11:19.087 | 99.99th=[ 148] 00:11:19.087 bw ( KiB/s): min=139776, max=188416, per=8.24%, avg=177142.35, stdev=12841.20, samples=20 00:11:19.087 iops : min= 546, max= 736, avg=691.85, stdev=50.15, samples=20 00:11:19.087 lat (msec) : 50=0.33%, 100=86.36%, 250=13.31% 00:11:19.087 cpu : usr=0.45%, sys=2.46%, ctx=1498, 
majf=0, minf=4097 00:11:19.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.087 issued rwts: total=6985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.087 job1: (groupid=0, jobs=1): err= 0: pid=66765: Mon Dec 2 07:38:42 2024 00:11:19.087 read: IOPS=603, BW=151MiB/s (158MB/s)(1520MiB/10081msec) 00:11:19.087 slat (usec): min=20, max=45320, avg=1640.43, stdev=3610.67 00:11:19.087 clat (msec): min=30, max=198, avg=104.34, stdev=19.50 00:11:19.087 lat (msec): min=30, max=198, avg=105.98, stdev=19.75 00:11:19.087 clat percentiles (msec): 00:11:19.087 | 1.00th=[ 55], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 89], 00:11:19.087 | 30.00th=[ 102], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 113], 00:11:19.087 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 125], 00:11:19.087 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 192], 99.95th=[ 192], 00:11:19.087 | 99.99th=[ 199] 00:11:19.087 bw ( KiB/s): min=136431, max=230912, per=7.16%, avg=154003.60, stdev=26691.50, samples=20 00:11:19.087 iops : min= 532, max= 902, avg=601.45, stdev=104.29, samples=20 00:11:19.087 lat (msec) : 50=0.43%, 100=28.62%, 250=70.95% 00:11:19.087 cpu : usr=0.41%, sys=2.48%, ctx=1385, majf=0, minf=4097 00:11:19.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:19.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.087 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.087 job2: (groupid=0, jobs=1): err= 0: pid=66766: Mon Dec 2 07:38:42 2024 00:11:19.087 read: IOPS=1939, BW=485MiB/s (509MB/s)(4854MiB/10008msec) 00:11:19.087 slat (usec): min=20, max=25564, avg=511.92, stdev=1136.38 00:11:19.087 clat (usec): min=6068, max=90697, avg=32454.65, stdev=7164.48 00:11:19.087 lat (usec): min=8804, max=90738, avg=32966.57, stdev=7242.43 00:11:19.087 clat percentiles (usec): 00:11:19.087 | 1.00th=[27132], 5.00th=[28443], 10.00th=[29230], 20.00th=[29754], 00:11:19.087 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:11:19.087 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33817], 95.00th=[35914], 00:11:19.087 | 99.00th=[71828], 99.50th=[78119], 99.90th=[87557], 99.95th=[89654], 00:11:19.087 | 99.99th=[90702] 00:11:19.087 bw ( KiB/s): min=233939, max=525312, per=23.00%, avg=494453.74, stdev=75672.57, samples=19 00:11:19.087 iops : min= 913, max= 2052, avg=1931.32, stdev=295.72, samples=19 00:11:19.087 lat (msec) : 10=0.03%, 20=0.10%, 50=96.32%, 100=3.55% 00:11:19.087 cpu : usr=0.74%, sys=4.97%, ctx=4086, majf=0, minf=4097 00:11:19.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:19.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.087 issued rwts: total=19414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.087 job3: (groupid=0, jobs=1): err= 0: pid=66767: Mon Dec 2 07:38:42 2024 00:11:19.087 read: IOPS=676, BW=169MiB/s (177MB/s)(1703MiB/10061msec) 00:11:19.087 slat (usec): min=20, 
max=24841, avg=1464.12, stdev=3225.97 00:11:19.087 clat (msec): min=30, max=144, avg=92.96, stdev=12.36 00:11:19.087 lat (msec): min=30, max=144, avg=94.43, stdev=12.54 00:11:19.087 clat percentiles (msec): 00:11:19.087 | 1.00th=[ 71], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 85], 00:11:19.087 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:11:19.087 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 120], 00:11:19.087 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 134], 99.95th=[ 144], 00:11:19.087 | 99.99th=[ 146] 00:11:19.087 bw ( KiB/s): min=137216, max=187904, per=8.03%, avg=172710.05, stdev=15605.12, samples=20 00:11:19.087 iops : min= 536, max= 734, avg=674.50, stdev=61.04, samples=20 00:11:19.087 lat (msec) : 50=0.29%, 100=79.35%, 250=20.35% 00:11:19.087 cpu : usr=0.40%, sys=2.59%, ctx=1504, majf=0, minf=4097 00:11:19.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.087 issued rwts: total=6810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.087 job4: (groupid=0, jobs=1): err= 0: pid=66768: Mon Dec 2 07:38:42 2024 00:11:19.087 read: IOPS=564, BW=141MiB/s (148MB/s)(1423MiB/10091msec) 00:11:19.087 slat (usec): min=19, max=28350, avg=1733.83, stdev=3775.08 00:11:19.087 clat (msec): min=25, max=195, avg=111.51, stdev=12.33 00:11:19.087 lat (msec): min=25, max=195, avg=113.25, stdev=12.59 00:11:19.087 clat percentiles (msec): 00:11:19.087 | 1.00th=[ 64], 5.00th=[ 90], 10.00th=[ 99], 20.00th=[ 107], 00:11:19.087 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 00:11:19.087 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 126], 00:11:19.087 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 182], 99.95th=[ 197], 00:11:19.087 | 99.99th=[ 197] 00:11:19.087 bw ( KiB/s): min=134656, max=167246, per=6.70%, avg=144119.10, stdev=8113.71, samples=20 00:11:19.087 iops : min= 526, max= 653, avg=562.95, stdev=31.65, samples=20 00:11:19.087 lat (msec) : 50=0.67%, 100=10.28%, 250=89.05% 00:11:19.088 cpu : usr=0.29%, sys=1.90%, ctx=1415, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=5692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 job5: (groupid=0, jobs=1): err= 0: pid=66769: Mon Dec 2 07:38:42 2024 00:11:19.088 read: IOPS=695, BW=174MiB/s (182MB/s)(1748MiB/10057msec) 00:11:19.088 slat (usec): min=20, max=36559, avg=1417.92, stdev=3205.96 00:11:19.088 clat (msec): min=41, max=140, avg=90.54, stdev=11.07 00:11:19.088 lat (msec): min=41, max=140, avg=91.96, stdev=11.17 00:11:19.088 clat percentiles (msec): 00:11:19.088 | 1.00th=[ 61], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 83], 00:11:19.088 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:11:19.088 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 104], 95.00th=[ 112], 00:11:19.088 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 140], 00:11:19.088 | 99.99th=[ 140] 00:11:19.088 bw ( KiB/s): min=141824, max=185856, per=8.25%, avg=177294.65, stdev=12229.26, samples=20 00:11:19.088 iops : min= 
554, max= 726, avg=692.50, stdev=47.74, samples=20 00:11:19.088 lat (msec) : 50=0.41%, 100=84.72%, 250=14.86% 00:11:19.088 cpu : usr=0.33%, sys=2.82%, ctx=1502, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 job6: (groupid=0, jobs=1): err= 0: pid=66770: Mon Dec 2 07:38:42 2024 00:11:19.088 read: IOPS=696, BW=174MiB/s (183MB/s)(1752MiB/10059msec) 00:11:19.088 slat (usec): min=20, max=39016, avg=1422.82, stdev=3199.61 00:11:19.088 clat (msec): min=53, max=145, avg=90.33, stdev=10.26 00:11:19.088 lat (msec): min=53, max=148, avg=91.76, stdev=10.38 00:11:19.088 clat percentiles (msec): 00:11:19.088 | 1.00th=[ 68], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 84], 00:11:19.088 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:11:19.088 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 112], 00:11:19.088 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 146], 00:11:19.088 | 99.99th=[ 146] 00:11:19.088 bw ( KiB/s): min=140800, max=188416, per=8.27%, avg=177740.80, stdev=11891.16, samples=20 00:11:19.088 iops : min= 550, max= 736, avg=694.30, stdev=46.45, samples=20 00:11:19.088 lat (msec) : 100=88.04%, 250=11.96% 00:11:19.088 cpu : usr=0.40%, sys=2.94%, ctx=1539, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=7006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 job7: (groupid=0, jobs=1): err= 0: pid=66771: Mon Dec 2 07:38:42 2024 00:11:19.088 read: IOPS=674, BW=169MiB/s (177MB/s)(1697MiB/10066msec) 00:11:19.088 slat (usec): min=20, max=23664, avg=1469.31, stdev=3237.63 00:11:19.088 clat (msec): min=26, max=145, avg=93.26, stdev=13.07 00:11:19.088 lat (msec): min=26, max=145, avg=94.73, stdev=13.25 00:11:19.088 clat percentiles (msec): 00:11:19.088 | 1.00th=[ 70], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 85], 00:11:19.088 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:11:19.088 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 121], 00:11:19.088 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 146], 00:11:19.088 | 99.99th=[ 146] 00:11:19.088 bw ( KiB/s): min=132096, max=188928, per=8.01%, avg=172185.60, stdev=16948.67, samples=20 00:11:19.088 iops : min= 516, max= 738, avg=672.60, stdev=66.21, samples=20 00:11:19.088 lat (msec) : 50=0.37%, 100=78.95%, 250=20.68% 00:11:19.088 cpu : usr=0.35%, sys=2.40%, ctx=1516, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=6789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 job8: (groupid=0, jobs=1): err= 0: pid=66772: Mon Dec 2 07:38:42 2024 00:11:19.088 read: 
IOPS=582, BW=146MiB/s (153MB/s)(1468MiB/10090msec) 00:11:19.088 slat (usec): min=20, max=31099, avg=1680.34, stdev=3791.27 00:11:19.088 clat (msec): min=7, max=196, avg=108.12, stdev=17.51 00:11:19.088 lat (msec): min=7, max=196, avg=109.80, stdev=17.80 00:11:19.088 clat percentiles (msec): 00:11:19.088 | 1.00th=[ 37], 5.00th=[ 78], 10.00th=[ 87], 20.00th=[ 99], 00:11:19.088 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:11:19.088 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 127], 00:11:19.088 | 99.00th=[ 136], 99.50th=[ 159], 99.90th=[ 194], 99.95th=[ 197], 00:11:19.088 | 99.99th=[ 197] 00:11:19.088 bw ( KiB/s): min=137216, max=186368, per=6.92%, avg=148751.15, stdev=16177.15, samples=20 00:11:19.088 iops : min= 536, max= 728, avg=581.05, stdev=63.19, samples=20 00:11:19.088 lat (msec) : 10=0.20%, 20=0.15%, 50=1.14%, 100=19.58%, 250=78.92% 00:11:19.088 cpu : usr=0.21%, sys=1.72%, ctx=1506, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=5873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 job9: (groupid=0, jobs=1): err= 0: pid=66773: Mon Dec 2 07:38:42 2024 00:11:19.088 read: IOPS=696, BW=174MiB/s (183MB/s)(1753MiB/10068msec) 00:11:19.088 slat (usec): min=20, max=49995, avg=1416.96, stdev=3168.07 00:11:19.088 clat (msec): min=21, max=147, avg=90.32, stdev=15.98 00:11:19.088 lat (msec): min=23, max=161, avg=91.73, stdev=16.18 00:11:19.088 clat percentiles (msec): 00:11:19.088 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 81], 00:11:19.088 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 93], 00:11:19.088 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 114], 95.00th=[ 121], 00:11:19.088 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 144], 00:11:19.088 | 99.99th=[ 148] 00:11:19.088 bw ( KiB/s): min=131584, max=228864, per=8.27%, avg=177868.80, stdev=22777.96, samples=20 00:11:19.088 iops : min= 514, max= 894, avg=694.80, stdev=88.98, samples=20 00:11:19.088 lat (msec) : 50=0.86%, 100=78.72%, 250=20.43% 00:11:19.088 cpu : usr=0.25%, sys=1.98%, ctx=1740, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=7011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 job10: (groupid=0, jobs=1): err= 0: pid=66774: Mon Dec 2 07:38:42 2024 00:11:19.088 read: IOPS=604, BW=151MiB/s (158MB/s)(1524MiB/10087msec) 00:11:19.088 slat (usec): min=20, max=32640, avg=1635.74, stdev=3577.67 00:11:19.088 clat (msec): min=19, max=200, avg=104.08, stdev=19.33 00:11:19.088 lat (msec): min=20, max=202, avg=105.71, stdev=19.60 00:11:19.088 clat percentiles (msec): 00:11:19.088 | 1.00th=[ 56], 5.00th=[ 63], 10.00th=[ 73], 20.00th=[ 88], 00:11:19.088 | 30.00th=[ 100], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 113], 00:11:19.088 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 125], 00:11:19.088 | 99.00th=[ 134], 99.50th=[ 155], 99.90th=[ 186], 99.95th=[ 186], 00:11:19.088 | 99.99th=[ 201] 00:11:19.088 bw 
( KiB/s): min=137216, max=238080, per=7.18%, avg=154470.40, stdev=27236.28, samples=20 00:11:19.088 iops : min= 536, max= 930, avg=603.40, stdev=106.39, samples=20 00:11:19.088 lat (msec) : 20=0.02%, 50=0.16%, 100=30.28%, 250=69.54% 00:11:19.088 cpu : usr=0.35%, sys=2.36%, ctx=1377, majf=0, minf=4097 00:11:19.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:19.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:19.088 issued rwts: total=6097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:19.088 00:11:19.088 Run status group 0 (all jobs): 00:11:19.088 READ: bw=2100MiB/s (2202MB/s), 141MiB/s-485MiB/s (148MB/s-509MB/s), io=20.7GiB (22.2GB), run=10008-10091msec 00:11:19.088 00:11:19.088 Disk stats (read/write): 00:11:19.088 nvme0n1: ios=13849/0, merge=0/0, ticks=1233947/0, in_queue=1233947, util=97.80% 00:11:19.088 nvme10n1: ios=12053/0, merge=0/0, ticks=1231363/0, in_queue=1231363, util=97.96% 00:11:19.088 nvme1n1: ios=37741/0, merge=0/0, ticks=1211627/0, in_queue=1211627, util=98.13% 00:11:19.088 nvme2n1: ios=13506/0, merge=0/0, ticks=1234113/0, in_queue=1234113, util=98.17% 00:11:19.088 nvme3n1: ios=11270/0, merge=0/0, ticks=1229860/0, in_queue=1229860, util=98.34% 00:11:19.088 nvme4n1: ios=13864/0, merge=0/0, ticks=1236006/0, in_queue=1236006, util=98.48% 00:11:19.088 nvme5n1: ios=13905/0, merge=0/0, ticks=1235932/0, in_queue=1235932, util=98.63% 00:11:19.088 nvme6n1: ios=13475/0, merge=0/0, ticks=1234927/0, in_queue=1234927, util=98.66% 00:11:19.088 nvme7n1: ios=11643/0, merge=0/0, ticks=1230869/0, in_queue=1230869, util=98.95% 00:11:19.088 nvme8n1: ios=13923/0, merge=0/0, ticks=1237167/0, in_queue=1237167, util=99.04% 00:11:19.088 nvme9n1: ios=12089/0, merge=0/0, ticks=1231500/0, in_queue=1231500, util=99.13% 00:11:19.088 07:38:42 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:11:19.088 [global] 00:11:19.088 thread=1 00:11:19.088 invalidate=1 00:11:19.088 rw=randwrite 00:11:19.088 time_based=1 00:11:19.088 runtime=10 00:11:19.088 ioengine=libaio 00:11:19.088 direct=1 00:11:19.088 bs=262144 00:11:19.088 iodepth=64 00:11:19.088 norandommap=1 00:11:19.088 numjobs=1 00:11:19.088 00:11:19.088 [job0] 00:11:19.088 filename=/dev/nvme0n1 00:11:19.088 [job1] 00:11:19.088 filename=/dev/nvme10n1 00:11:19.088 [job2] 00:11:19.088 filename=/dev/nvme1n1 00:11:19.088 [job3] 00:11:19.089 filename=/dev/nvme2n1 00:11:19.089 [job4] 00:11:19.089 filename=/dev/nvme3n1 00:11:19.089 [job5] 00:11:19.089 filename=/dev/nvme4n1 00:11:19.089 [job6] 00:11:19.089 filename=/dev/nvme5n1 00:11:19.089 [job7] 00:11:19.089 filename=/dev/nvme6n1 00:11:19.089 [job8] 00:11:19.089 filename=/dev/nvme7n1 00:11:19.089 [job9] 00:11:19.089 filename=/dev/nvme8n1 00:11:19.089 [job10] 00:11:19.089 filename=/dev/nvme9n1 00:11:19.089 Could not set queue depth (nvme0n1) 00:11:19.089 Could not set queue depth (nvme10n1) 00:11:19.089 Could not set queue depth (nvme1n1) 00:11:19.089 Could not set queue depth (nvme2n1) 00:11:19.089 Could not set queue depth (nvme3n1) 00:11:19.089 Could not set queue depth (nvme4n1) 00:11:19.089 Could not set queue depth (nvme5n1) 00:11:19.089 Could not set queue depth (nvme6n1) 00:11:19.089 Could not set queue depth (nvme7n1) 00:11:19.089 Could not set queue depth (nvme8n1) 00:11:19.089 Could not set queue depth 
(nvme9n1) 00:11:19.089 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:19.089 fio-3.35 00:11:19.089 Starting 11 threads 00:11:29.067 00:11:29.067 job0: (groupid=0, jobs=1): err= 0: pid=66968: Mon Dec 2 07:38:53 2024 00:11:29.067 write: IOPS=397, BW=99.3MiB/s (104MB/s)(1007MiB/10143msec); 0 zone resets 00:11:29.067 slat (usec): min=17, max=54808, avg=2478.04, stdev=4331.11 00:11:29.067 clat (msec): min=16, max=301, avg=158.62, stdev=19.16 00:11:29.067 lat (msec): min=16, max=301, avg=161.09, stdev=18.94 00:11:29.067 clat percentiles (msec): 00:11:29.067 | 1.00th=[ 63], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.067 | 30.00th=[ 159], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.067 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:11:29.067 | 99.00th=[ 232], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:11:29.067 | 99.99th=[ 300] 00:11:29.067 bw ( KiB/s): min=92160, max=104448, per=7.02%, avg=101493.95, stdev=2523.80, samples=20 00:11:29.067 iops : min= 360, max= 408, avg=396.45, stdev= 9.86, samples=20 00:11:29.067 lat (msec) : 20=0.20%, 50=0.60%, 100=0.70%, 250=97.96%, 500=0.55% 00:11:29.067 cpu : usr=0.60%, sys=1.12%, ctx=4414, majf=0, minf=1 00:11:29.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:29.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.067 issued rwts: total=0,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.067 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.067 job1: (groupid=0, jobs=1): err= 0: pid=66969: Mon Dec 2 07:38:53 2024 00:11:29.067 write: IOPS=1112, BW=278MiB/s (292MB/s)(2795MiB/10051msec); 0 zone resets 00:11:29.067 slat (usec): min=17, max=7757, avg=889.82, stdev=1486.50 00:11:29.067 clat (msec): min=7, max=107, avg=56.64, stdev= 4.56 00:11:29.067 lat (msec): min=7, max=108, avg=57.53, stdev= 4.42 00:11:29.067 clat percentiles (msec): 00:11:29.067 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55], 00:11:29.067 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 57], 00:11:29.067 | 70.00th=[ 58], 
80.00th=[ 58], 90.00th=[ 59], 95.00th=[ 61], 00:11:29.067 | 99.00th=[ 80], 99.50th=[ 90], 99.90th=[ 97], 99.95th=[ 105], 00:11:29.067 | 99.99th=[ 108] 00:11:29.067 bw ( KiB/s): min=247824, max=292352, per=19.67%, avg=284462.20, stdev=9152.75, samples=20 00:11:29.067 iops : min= 968, max= 1142, avg=1111.10, stdev=35.77, samples=20 00:11:29.067 lat (msec) : 10=0.04%, 20=0.11%, 50=0.19%, 100=99.58%, 250=0.09% 00:11:29.067 cpu : usr=1.57%, sys=2.66%, ctx=14759, majf=0, minf=1 00:11:29.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:29.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.067 issued rwts: total=0,11178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.067 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.067 job2: (groupid=0, jobs=1): err= 0: pid=66981: Mon Dec 2 07:38:53 2024 00:11:29.067 write: IOPS=402, BW=101MiB/s (105MB/s)(1020MiB/10146msec); 0 zone resets 00:11:29.067 slat (usec): min=17, max=14927, avg=2428.72, stdev=4229.15 00:11:29.067 clat (msec): min=7, max=297, avg=156.57, stdev=19.85 00:11:29.067 lat (msec): min=7, max=297, avg=159.00, stdev=19.71 00:11:29.067 clat percentiles (msec): 00:11:29.067 | 1.00th=[ 59], 5.00th=[ 148], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.067 | 30.00th=[ 159], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.067 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:11:29.067 | 99.00th=[ 201], 99.50th=[ 249], 99.90th=[ 288], 99.95th=[ 288], 00:11:29.067 | 99.99th=[ 296] 00:11:29.067 bw ( KiB/s): min=100352, max=116502, per=7.11%, avg=102828.60, stdev=3295.02, samples=20 00:11:29.067 iops : min= 392, max= 455, avg=401.65, stdev=12.86, samples=20 00:11:29.067 lat (msec) : 10=0.05%, 20=0.20%, 50=0.59%, 100=2.11%, 250=96.62% 00:11:29.067 lat (msec) : 500=0.44% 00:11:29.067 cpu : usr=0.69%, sys=1.23%, ctx=3570, majf=0, minf=1 00:11:29.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:29.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.067 issued rwts: total=0,4081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.067 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.067 job3: (groupid=0, jobs=1): err= 0: pid=66982: Mon Dec 2 07:38:53 2024 00:11:29.067 write: IOPS=397, BW=99.3MiB/s (104MB/s)(1007MiB/10136msec); 0 zone resets 00:11:29.067 slat (usec): min=17, max=32472, avg=2477.54, stdev=4283.79 00:11:29.067 clat (msec): min=12, max=296, avg=158.51, stdev=16.41 00:11:29.067 lat (msec): min=12, max=296, avg=160.98, stdev=16.10 00:11:29.067 clat percentiles (msec): 00:11:29.067 | 1.00th=[ 83], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.067 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.067 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 165], 00:11:29.067 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 288], 99.95th=[ 288], 00:11:29.067 | 99.99th=[ 296] 00:11:29.067 bw ( KiB/s): min=98618, max=106496, per=7.02%, avg=101484.05, stdev=1668.81, samples=20 00:11:29.067 iops : min= 385, max= 416, avg=396.40, stdev= 6.55, samples=20 00:11:29.067 lat (msec) : 20=0.10%, 50=0.50%, 100=0.70%, 250=98.26%, 500=0.45% 00:11:29.067 cpu : usr=0.72%, sys=0.95%, ctx=4279, majf=0, minf=1 00:11:29.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, 
>=64=98.4% 00:11:29.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.067 issued rwts: total=0,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.067 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job4: (groupid=0, jobs=1): err= 0: pid=66983: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=398, BW=99.6MiB/s (104MB/s)(1010MiB/10146msec); 0 zone resets 00:11:29.068 slat (usec): min=19, max=31424, avg=2470.27, stdev=4256.38 00:11:29.068 clat (msec): min=17, max=299, avg=158.15, stdev=16.04 00:11:29.068 lat (msec): min=17, max=299, avg=160.62, stdev=15.71 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 88], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.068 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.068 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:11:29.068 | 99.00th=[ 199], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:11:29.068 | 99.99th=[ 300] 00:11:29.068 bw ( KiB/s): min=100151, max=108544, per=7.04%, avg=101816.70, stdev=1903.56, samples=20 00:11:29.068 iops : min= 391, max= 424, avg=397.70, stdev= 7.46, samples=20 00:11:29.068 lat (msec) : 20=0.10%, 50=0.40%, 100=0.69%, 250=98.37%, 500=0.45% 00:11:29.068 cpu : usr=0.69%, sys=1.32%, ctx=5855, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,4041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job5: (groupid=0, jobs=1): err= 0: pid=66984: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=398, BW=99.7MiB/s (105MB/s)(1011MiB/10142msec); 0 zone resets 00:11:29.068 slat (usec): min=17, max=14971, avg=2466.84, stdev=4248.62 00:11:29.068 clat (msec): min=14, max=299, avg=157.93, stdev=17.31 00:11:29.068 lat (msec): min=14, max=299, avg=160.40, stdev=17.04 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 85], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.068 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 163], 00:11:29.068 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 165], 00:11:29.068 | 99.00th=[ 199], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:11:29.068 | 99.99th=[ 300] 00:11:29.068 bw ( KiB/s): min=98816, max=114688, per=7.05%, avg=101918.90, stdev=3280.07, samples=20 00:11:29.068 iops : min= 386, max= 448, avg=398.10, stdev=12.82, samples=20 00:11:29.068 lat (msec) : 20=0.10%, 50=0.40%, 100=0.79%, 250=98.17%, 500=0.54% 00:11:29.068 cpu : usr=0.70%, sys=1.25%, ctx=4174, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,4045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job6: (groupid=0, jobs=1): err= 0: pid=66986: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=679, BW=170MiB/s (178MB/s)(1714MiB/10088msec); 0 zone resets 00:11:29.068 slat (usec): min=17, max=106818, avg=1424.28, stdev=2739.24 00:11:29.068 clat 
(msec): min=17, max=281, avg=92.74, stdev=16.84 00:11:29.068 lat (msec): min=17, max=284, avg=94.16, stdev=16.87 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 43], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 89], 00:11:29.068 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 93], 60.00th=[ 93], 00:11:29.068 | 70.00th=[ 94], 80.00th=[ 94], 90.00th=[ 95], 95.00th=[ 95], 00:11:29.068 | 99.00th=[ 184], 99.50th=[ 226], 99.90th=[ 264], 99.95th=[ 271], 00:11:29.068 | 99.99th=[ 284] 00:11:29.068 bw ( KiB/s): min=139543, max=178176, per=12.02%, avg=173846.15, stdev=8236.91, samples=20 00:11:29.068 iops : min= 545, max= 696, avg=679.05, stdev=32.20, samples=20 00:11:29.068 lat (msec) : 20=0.12%, 50=1.08%, 100=96.51%, 250=2.13%, 500=0.16% 00:11:29.068 cpu : usr=1.11%, sys=1.84%, ctx=9789, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,6854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job7: (groupid=0, jobs=1): err= 0: pid=66991: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=397, BW=99.3MiB/s (104MB/s)(1007MiB/10144msec); 0 zone resets 00:11:29.068 slat (usec): min=18, max=32993, avg=2476.75, stdev=4282.91 00:11:29.068 clat (msec): min=35, max=299, avg=158.60, stdev=16.11 00:11:29.068 lat (msec): min=35, max=299, avg=161.08, stdev=15.77 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 101], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.068 | 30.00th=[ 159], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.068 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:11:29.068 | 99.00th=[ 213], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:11:29.068 | 99.99th=[ 300] 00:11:29.068 bw ( KiB/s): min=92160, max=102912, per=7.02%, avg=101519.50, stdev=2335.59, samples=20 00:11:29.068 iops : min= 360, max= 402, avg=396.55, stdev= 9.13, samples=20 00:11:29.068 lat (msec) : 50=0.30%, 100=0.69%, 250=98.46%, 500=0.55% 00:11:29.068 cpu : usr=0.72%, sys=1.24%, ctx=4859, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,4029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job8: (groupid=0, jobs=1): err= 0: pid=66993: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=406, BW=102MiB/s (106MB/s)(1029MiB/10134msec); 0 zone resets 00:11:29.068 slat (usec): min=16, max=12710, avg=2369.03, stdev=4175.36 00:11:29.068 clat (msec): min=10, max=294, avg=155.17, stdev=22.56 00:11:29.068 lat (msec): min=10, max=294, avg=157.54, stdev=22.63 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 48], 5.00th=[ 115], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.068 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.068 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:11:29.068 | 99.00th=[ 192], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:11:29.068 | 99.99th=[ 296] 00:11:29.068 bw ( KiB/s): min=98816, max=147968, per=7.17%, avg=103731.20, stdev=10482.88, samples=20 00:11:29.068 
iops : min= 386, max= 578, avg=405.20, stdev=40.95, samples=20 00:11:29.068 lat (msec) : 20=0.29%, 50=0.78%, 100=3.33%, 250=95.16%, 500=0.44% 00:11:29.068 cpu : usr=0.82%, sys=1.28%, ctx=5929, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,4115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job9: (groupid=0, jobs=1): err= 0: pid=66994: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=686, BW=172MiB/s (180MB/s)(1732MiB/10084msec); 0 zone resets 00:11:29.068 slat (usec): min=16, max=44500, avg=1439.77, stdev=2483.92 00:11:29.068 clat (msec): min=46, max=171, avg=91.70, stdev= 6.33 00:11:29.068 lat (msec): min=46, max=171, avg=93.14, stdev= 5.91 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 66], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 89], 00:11:29.068 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 93], 60.00th=[ 93], 00:11:29.068 | 70.00th=[ 94], 80.00th=[ 94], 90.00th=[ 95], 95.00th=[ 95], 00:11:29.068 | 99.00th=[ 102], 99.50th=[ 125], 99.90th=[ 159], 99.95th=[ 165], 00:11:29.068 | 99.99th=[ 171] 00:11:29.068 bw ( KiB/s): min=172544, max=178176, per=12.15%, avg=175736.00, stdev=1422.92, samples=20 00:11:29.068 iops : min= 674, max= 696, avg=686.45, stdev= 5.55, samples=20 00:11:29.068 lat (msec) : 50=0.04%, 100=98.90%, 250=1.05% 00:11:29.068 cpu : usr=1.17%, sys=1.42%, ctx=9111, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,6927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 job10: (groupid=0, jobs=1): err= 0: pid=66995: Mon Dec 2 07:38:53 2024 00:11:29.068 write: IOPS=393, BW=98.4MiB/s (103MB/s)(998MiB/10143msec); 0 zone resets 00:11:29.068 slat (usec): min=19, max=72965, avg=2500.04, stdev=4445.49 00:11:29.068 clat (msec): min=75, max=299, avg=160.09, stdev=15.24 00:11:29.068 lat (msec): min=75, max=299, avg=162.59, stdev=14.79 00:11:29.068 clat percentiles (msec): 00:11:29.068 | 1.00th=[ 148], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:11:29.068 | 30.00th=[ 159], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:11:29.068 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:11:29.068 | 99.00th=[ 236], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 300], 00:11:29.068 | 99.99th=[ 300] 00:11:29.068 bw ( KiB/s): min=71168, max=104448, per=6.95%, avg=100536.30, stdev=7011.54, samples=20 00:11:29.068 iops : min= 278, max= 408, avg=392.70, stdev=27.38, samples=20 00:11:29.068 lat (msec) : 100=0.28%, 250=99.17%, 500=0.55% 00:11:29.068 cpu : usr=0.81%, sys=1.21%, ctx=4326, majf=0, minf=1 00:11:29.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:29.068 issued rwts: total=0,3991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.068 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:29.068 00:11:29.068 Run 
status group 0 (all jobs): 00:11:29.068 WRITE: bw=1412MiB/s (1481MB/s), 98.4MiB/s-278MiB/s (103MB/s-292MB/s), io=14.0GiB (15.0GB), run=10051-10146msec 00:11:29.068 00:11:29.068 Disk stats (read/write): 00:11:29.068 nvme0n1: ios=50/7921, merge=0/0, ticks=62/1211990, in_queue=1212052, util=97.94% 00:11:29.068 nvme10n1: ios=49/22228, merge=0/0, ticks=54/1219650, in_queue=1219704, util=98.06% 00:11:29.068 nvme1n1: ios=48/8019, merge=0/0, ticks=45/1211496, in_queue=1211541, util=98.02% 00:11:29.068 nvme2n1: ios=31/7917, merge=0/0, ticks=18/1210500, in_queue=1210518, util=97.93% 00:11:29.068 nvme3n1: ios=22/7948, merge=0/0, ticks=24/1212533, in_queue=1212557, util=98.14% 00:11:29.068 nvme4n1: ios=0/7957, merge=0/0, ticks=0/1211842, in_queue=1211842, util=98.24% 00:11:29.069 nvme5n1: ios=0/13560, merge=0/0, ticks=0/1216782, in_queue=1216782, util=98.32% 00:11:29.069 nvme6n1: ios=0/7922, merge=0/0, ticks=0/1212031, in_queue=1212031, util=98.41% 00:11:29.069 nvme7n1: ios=0/8089, merge=0/0, ticks=0/1211557, in_queue=1211557, util=98.59% 00:11:29.069 nvme8n1: ios=0/13690, merge=0/0, ticks=0/1214506, in_queue=1214506, util=98.67% 00:11:29.069 nvme9n1: ios=0/7845, merge=0/0, ticks=0/1212037, in_queue=1212037, util=98.85% 00:11:29.069 07:38:53 -- target/multiconnection.sh@36 -- # sync 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # seq 1 11 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:11:29.069 07:38:53 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.069 07:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:11:29.069 07:38:53 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:29.069 07:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:53 -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:11:29.069 07:38:53 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:29.069 07:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:11:29.069 07:38:53 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:29.069 07:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:11:29.069 07:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:53 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:11:29.069 07:38:53 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:11:29.069 07:38:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:11:29.069 07:38:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:11:29.069 07:38:53 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:11:29.069 07:38:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:11:29.069 07:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:11:29.069 07:38:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:11:29.069 07:38:54 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:11:29.069 07:38:54 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:11:29.069 07:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:11:29.069 07:38:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:11:29.069 07:38:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:11:29.069 07:38:54 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:11:29.069 07:38:54 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:11:29.069 07:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.069 07:38:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.069 07:38:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:11:29.069 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:11:29.069 07:38:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:11:29.069 07:38:54 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.069 07:38:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.069 07:38:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:11:29.069 07:38:54 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.069 07:38:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:11:29.069 07:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.069 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.069 07:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.070 07:38:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:29.070 07:38:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:11:29.070 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:11:29.070 07:38:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:11:29.070 07:38:54 -- common/autotest_common.sh@1208 -- # local i=0 00:11:29.070 07:38:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:29.070 07:38:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:11:29.070 07:38:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:29.070 07:38:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:11:29.070 07:38:54 -- common/autotest_common.sh@1220 -- # return 0 00:11:29.070 07:38:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:11:29.070 07:38:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.070 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.070 07:38:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.070 07:38:54 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:11:29.070 07:38:54 -- target/multiconnection.sh@45 -- # trap - 
SIGINT SIGTERM EXIT 00:11:29.070 07:38:54 -- target/multiconnection.sh@47 -- # nvmftestfini 00:11:29.070 07:38:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:29.070 07:38:54 -- nvmf/common.sh@116 -- # sync 00:11:29.070 07:38:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:29.070 07:38:54 -- nvmf/common.sh@119 -- # set +e 00:11:29.070 07:38:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:29.070 07:38:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:29.070 rmmod nvme_tcp 00:11:29.070 rmmod nvme_fabrics 00:11:29.070 rmmod nvme_keyring 00:11:29.070 07:38:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:29.070 07:38:54 -- nvmf/common.sh@123 -- # set -e 00:11:29.070 07:38:54 -- nvmf/common.sh@124 -- # return 0 00:11:29.070 07:38:54 -- nvmf/common.sh@477 -- # '[' -n 66301 ']' 00:11:29.070 07:38:54 -- nvmf/common.sh@478 -- # killprocess 66301 00:11:29.070 07:38:54 -- common/autotest_common.sh@936 -- # '[' -z 66301 ']' 00:11:29.070 07:38:54 -- common/autotest_common.sh@940 -- # kill -0 66301 00:11:29.070 07:38:54 -- common/autotest_common.sh@941 -- # uname 00:11:29.070 07:38:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:29.070 07:38:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66301 00:11:29.070 07:38:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:29.070 killing process with pid 66301 00:11:29.070 07:38:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:29.070 07:38:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66301' 00:11:29.070 07:38:54 -- common/autotest_common.sh@955 -- # kill 66301 00:11:29.070 07:38:54 -- common/autotest_common.sh@960 -- # wait 66301 00:11:29.329 07:38:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:29.329 07:38:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:29.329 07:38:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:29.329 07:38:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.329 07:38:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:29.329 07:38:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.329 07:38:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.329 07:38:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.329 07:38:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:29.329 00:11:29.329 real 0m48.869s 00:11:29.329 user 2m38.458s 00:11:29.329 sys 0m35.942s 00:11:29.329 ************************************ 00:11:29.329 END TEST nvmf_multiconnection 00:11:29.329 ************************************ 00:11:29.329 07:38:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:29.329 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.329 07:38:54 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:11:29.329 07:38:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:29.329 07:38:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.329 07:38:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.589 ************************************ 00:11:29.589 START TEST nvmf_initiator_timeout 00:11:29.589 ************************************ 00:11:29.589 07:38:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:11:29.589 * Looking for test storage... 
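The trace above tears down the multiconnection run one subsystem at a time: the host disconnects each controller with nvme-cli, waits until the serial SPDK$i disappears from lsblk, and only then deletes the subsystem on the target. A condensed sketch of that loop, assuming the SPDK test helpers rpc_cmd and waitforserial_disconnect (sourced from common/autotest_common.sh) are available and NVMF_SUBSYS=11 as in this run:

#!/usr/bin/env bash
# Sketch only: the per-subsystem teardown multiconnection.sh performs above.
# Assumes rpc_cmd wraps scripts/rpc.py against the running nvmf_tgt and
# waitforserial_disconnect polls lsblk until the serial SPDK$i is gone.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"             # host: drop the NVMe-oF controller
    waitforserial_disconnect "SPDK$i"                            # wait for the block device to vanish
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"  # target: remove the subsystem
done

Deleting each subsystem only after its disconnect has completed keeps the target-side removal from racing the host's namespace teardown.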
00:11:29.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.589 07:38:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:29.589 07:38:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:29.589 07:38:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:29.589 07:38:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:29.589 07:38:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:29.589 07:38:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:29.589 07:38:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:29.589 07:38:55 -- scripts/common.sh@335 -- # IFS=.-: 00:11:29.589 07:38:55 -- scripts/common.sh@335 -- # read -ra ver1 00:11:29.589 07:38:55 -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.589 07:38:55 -- scripts/common.sh@336 -- # read -ra ver2 00:11:29.589 07:38:55 -- scripts/common.sh@337 -- # local 'op=<' 00:11:29.589 07:38:55 -- scripts/common.sh@339 -- # ver1_l=2 00:11:29.589 07:38:55 -- scripts/common.sh@340 -- # ver2_l=1 00:11:29.589 07:38:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:29.589 07:38:55 -- scripts/common.sh@343 -- # case "$op" in 00:11:29.589 07:38:55 -- scripts/common.sh@344 -- # : 1 00:11:29.589 07:38:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:29.589 07:38:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.589 07:38:55 -- scripts/common.sh@364 -- # decimal 1 00:11:29.589 07:38:55 -- scripts/common.sh@352 -- # local d=1 00:11:29.590 07:38:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.590 07:38:55 -- scripts/common.sh@354 -- # echo 1 00:11:29.590 07:38:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:29.590 07:38:55 -- scripts/common.sh@365 -- # decimal 2 00:11:29.590 07:38:55 -- scripts/common.sh@352 -- # local d=2 00:11:29.590 07:38:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.590 07:38:55 -- scripts/common.sh@354 -- # echo 2 00:11:29.590 07:38:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:29.590 07:38:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:29.590 07:38:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:29.590 07:38:55 -- scripts/common.sh@367 -- # return 0 00:11:29.590 07:38:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.590 07:38:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:29.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.590 --rc genhtml_branch_coverage=1 00:11:29.590 --rc genhtml_function_coverage=1 00:11:29.590 --rc genhtml_legend=1 00:11:29.590 --rc geninfo_all_blocks=1 00:11:29.590 --rc geninfo_unexecuted_blocks=1 00:11:29.590 00:11:29.590 ' 00:11:29.590 07:38:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:29.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.590 --rc genhtml_branch_coverage=1 00:11:29.590 --rc genhtml_function_coverage=1 00:11:29.590 --rc genhtml_legend=1 00:11:29.590 --rc geninfo_all_blocks=1 00:11:29.590 --rc geninfo_unexecuted_blocks=1 00:11:29.590 00:11:29.590 ' 00:11:29.590 07:38:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:29.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.590 --rc genhtml_branch_coverage=1 00:11:29.590 --rc genhtml_function_coverage=1 00:11:29.590 --rc genhtml_legend=1 00:11:29.590 --rc geninfo_all_blocks=1 00:11:29.590 --rc geninfo_unexecuted_blocks=1 00:11:29.590 00:11:29.590 ' 00:11:29.590 
07:38:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:29.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.590 --rc genhtml_branch_coverage=1 00:11:29.590 --rc genhtml_function_coverage=1 00:11:29.590 --rc genhtml_legend=1 00:11:29.590 --rc geninfo_all_blocks=1 00:11:29.590 --rc geninfo_unexecuted_blocks=1 00:11:29.590 00:11:29.590 ' 00:11:29.590 07:38:55 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.590 07:38:55 -- nvmf/common.sh@7 -- # uname -s 00:11:29.590 07:38:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.590 07:38:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.590 07:38:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.590 07:38:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.590 07:38:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.590 07:38:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.590 07:38:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.590 07:38:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.590 07:38:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.590 07:38:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.590 07:38:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:11:29.590 07:38:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:11:29.590 07:38:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.590 07:38:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.590 07:38:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.590 07:38:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.590 07:38:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.590 07:38:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.590 07:38:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.590 07:38:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.590 07:38:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.590 07:38:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.590 07:38:55 -- paths/export.sh@5 -- # export PATH 00:11:29.590 07:38:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.590 07:38:55 -- nvmf/common.sh@46 -- # : 0 00:11:29.590 07:38:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:29.590 07:38:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:29.590 07:38:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:29.590 07:38:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.590 07:38:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.590 07:38:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:29.590 07:38:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:29.590 07:38:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:29.590 07:38:55 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.590 07:38:55 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.590 07:38:55 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:11:29.590 07:38:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:29.590 07:38:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.590 07:38:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:29.590 07:38:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:29.590 07:38:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:29.590 07:38:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.590 07:38:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.590 07:38:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.590 07:38:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:29.590 07:38:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:29.590 07:38:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:29.590 07:38:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:29.590 07:38:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:29.590 07:38:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:29.590 07:38:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.590 07:38:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.590 07:38:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:29.590 07:38:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:29.590 07:38:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.590 07:38:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.590 07:38:55 
-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.590 07:38:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.590 07:38:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.590 07:38:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.590 07:38:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.590 07:38:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.590 07:38:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:29.590 07:38:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:29.590 Cannot find device "nvmf_tgt_br" 00:11:29.590 07:38:55 -- nvmf/common.sh@154 -- # true 00:11:29.590 07:38:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.590 Cannot find device "nvmf_tgt_br2" 00:11:29.590 07:38:55 -- nvmf/common.sh@155 -- # true 00:11:29.590 07:38:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:29.590 07:38:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:29.849 Cannot find device "nvmf_tgt_br" 00:11:29.849 07:38:55 -- nvmf/common.sh@157 -- # true 00:11:29.849 07:38:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:29.849 Cannot find device "nvmf_tgt_br2" 00:11:29.849 07:38:55 -- nvmf/common.sh@158 -- # true 00:11:29.849 07:38:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:29.849 07:38:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:29.849 07:38:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.849 07:38:55 -- nvmf/common.sh@161 -- # true 00:11:29.849 07:38:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.849 07:38:55 -- nvmf/common.sh@162 -- # true 00:11:29.849 07:38:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.849 07:38:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.849 07:38:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.849 07:38:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.849 07:38:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.849 07:38:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.849 07:38:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.849 07:38:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:29.849 07:38:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:29.849 07:38:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:29.849 07:38:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:29.849 07:38:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:29.849 07:38:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:29.849 07:38:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.849 07:38:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.849 07:38:55 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.849 07:38:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:29.849 07:38:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:29.849 07:38:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.849 07:38:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.849 07:38:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.849 07:38:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.849 07:38:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.849 07:38:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:29.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:11:29.849 00:11:29.849 --- 10.0.0.2 ping statistics --- 00:11:29.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.849 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:29.849 07:38:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:29.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:11:29.849 00:11:29.849 --- 10.0.0.3 ping statistics --- 00:11:29.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.850 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:29.850 07:38:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:29.850 00:11:29.850 --- 10.0.0.1 ping statistics --- 00:11:29.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.850 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:29.850 07:38:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.850 07:38:55 -- nvmf/common.sh@421 -- # return 0 00:11:29.850 07:38:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:29.850 07:38:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.850 07:38:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:29.850 07:38:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:29.850 07:38:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.850 07:38:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:29.850 07:38:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:29.850 07:38:55 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:11:29.850 07:38:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:29.850 07:38:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.850 07:38:55 -- common/autotest_common.sh@10 -- # set +x 00:11:29.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
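nvmf_veth_init above builds the test network: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the veth peer ends are enslaved to the nvmf_br bridge, with an iptables rule admitting TCP port 4420. A minimal sketch of the same topology, trimmed to one target interface and using only commands that appear in the trace:

# Sketch: single-target-interface version of the topology nvmf_veth_init sets up above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end + its bridge peer
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end + its bridge peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # bridge the two peer ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                              # same reachability check as in the log

The sub-millisecond ping round-trips above confirm the bridge path in both directions before nvmf_tgt is started inside the namespace.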
00:11:29.850 07:38:55 -- nvmf/common.sh@469 -- # nvmfpid=67368 00:11:29.850 07:38:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.850 07:38:55 -- nvmf/common.sh@470 -- # waitforlisten 67368 00:11:29.850 07:38:55 -- common/autotest_common.sh@829 -- # '[' -z 67368 ']' 00:11:29.850 07:38:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.850 07:38:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.850 07:38:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.850 07:38:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.850 07:38:55 -- common/autotest_common.sh@10 -- # set +x 00:11:30.108 [2024-12-02 07:38:55.528179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:30.108 [2024-12-02 07:38:55.528469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.108 [2024-12-02 07:38:55.662524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.108 [2024-12-02 07:38:55.716120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:30.108 [2024-12-02 07:38:55.716468] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.108 [2024-12-02 07:38:55.716594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.108 [2024-12-02 07:38:55.716775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
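With the target up and listening on /var/tmp/spdk.sock, initiator_timeout.sh (traced below) stacks a delay bdev on a 64 MB malloc bdev and exports it over TCP at 10.0.0.2:4420. A condensed sketch of that setup, assuming rpc_cmd forwards to scripts/rpc.py as in the test helpers; the latency arguments to bdev_delay_create are expressed in microseconds:

# Sketch of the setup RPCs traced below, in the order the script issues them.
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                   # 64 MB backing bdev, 512-byte blocks
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us avg/p99 read+write latency
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect through the veth bridge; NVME_HOSTNQN/NVME_HOSTID are set earlier in the trace.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"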
00:11:30.108 [2024-12-02 07:38:55.717029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.108 [2024-12-02 07:38:55.717151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.108 [2024-12-02 07:38:55.717279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.108 [2024-12-02 07:38:55.717282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.045 07:38:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.045 07:38:56 -- common/autotest_common.sh@862 -- # return 0 00:11:31.045 07:38:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:31.045 07:38:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.045 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.045 07:38:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.045 07:38:56 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:31.045 07:38:56 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:31.045 07:38:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.045 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.045 Malloc0 00:11:31.045 07:38:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.045 07:38:56 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:11:31.045 07:38:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.045 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.045 Delay0 00:11:31.045 07:38:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.045 07:38:56 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.045 07:38:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.045 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.046 [2024-12-02 07:38:56.533019] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.046 07:38:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.046 07:38:56 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.046 07:38:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.046 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.046 07:38:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.046 07:38:56 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.046 07:38:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.046 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.046 07:38:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.046 07:38:56 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.046 07:38:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.046 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.046 [2024-12-02 07:38:56.561198] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.046 07:38:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.046 07:38:56 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.305 07:38:56 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.305 07:38:56 -- common/autotest_common.sh@1187 -- # local i=0 00:11:31.305 07:38:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.305 07:38:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:31.305 07:38:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:33.210 07:38:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:33.210 07:38:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:33.210 07:38:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.210 07:38:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:33.210 07:38:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.210 07:38:58 -- common/autotest_common.sh@1197 -- # return 0 00:11:33.210 07:38:58 -- target/initiator_timeout.sh@35 -- # fio_pid=67432 00:11:33.210 07:38:58 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:11:33.210 07:38:58 -- target/initiator_timeout.sh@37 -- # sleep 3 00:11:33.210 [global] 00:11:33.210 thread=1 00:11:33.210 invalidate=1 00:11:33.210 rw=write 00:11:33.210 time_based=1 00:11:33.210 runtime=60 00:11:33.210 ioengine=libaio 00:11:33.210 direct=1 00:11:33.210 bs=4096 00:11:33.210 iodepth=1 00:11:33.210 norandommap=0 00:11:33.210 numjobs=1 00:11:33.210 00:11:33.210 verify_dump=1 00:11:33.210 verify_backlog=512 00:11:33.210 verify_state_save=0 00:11:33.210 do_verify=1 00:11:33.210 verify=crc32c-intel 00:11:33.210 [job0] 00:11:33.210 filename=/dev/nvme0n1 00:11:33.210 Could not set queue depth (nvme0n1) 00:11:33.469 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.469 fio-3.35 00:11:33.469 Starting 1 thread 00:11:36.753 07:39:01 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:11:36.753 07:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.753 07:39:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.753 true 00:11:36.753 07:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.753 07:39:01 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:11:36.753 07:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.753 07:39:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.753 true 00:11:36.753 07:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.753 07:39:01 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:11:36.753 07:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.753 07:39:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.753 true 00:11:36.753 07:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.753 07:39:01 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:11:36.753 07:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.753 07:39:01 -- common/autotest_common.sh@10 -- # set +x 00:11:36.753 true 00:11:36.753 07:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.753 07:39:01 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:11:39.287 07:39:04 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:11:39.287 07:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.287 07:39:04 -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 true 00:11:39.287 07:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.287 07:39:04 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:11:39.287 07:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.287 07:39:04 -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 true 00:11:39.287 07:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.287 07:39:04 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:11:39.287 07:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.287 07:39:04 -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 true 00:11:39.287 07:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.287 07:39:04 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:11:39.287 07:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.287 07:39:04 -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 true 00:11:39.287 07:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.287 07:39:04 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:11:39.287 07:39:04 -- target/initiator_timeout.sh@54 -- # wait 67432 00:12:35.520 00:12:35.520 job0: (groupid=0, jobs=1): err= 0: pid=67453: Mon Dec 2 07:39:59 2024 00:12:35.520 read: IOPS=841, BW=3365KiB/s (3446kB/s)(197MiB/60000msec) 00:12:35.520 slat (usec): min=9, max=11312, avg=12.92, stdev=61.17 00:12:35.520 clat (usec): min=144, max=40659k, avg=1000.19, stdev=180981.65 00:12:35.520 lat (usec): min=159, max=40659k, avg=1013.11, stdev=180981.66 00:12:35.520 clat percentiles (usec): 00:12:35.520 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:12:35.520 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:12:35.520 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:12:35.520 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 404], 00:12:35.520 | 99.99th=[ 758] 00:12:35.520 write: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec); 0 zone resets 00:12:35.520 slat (usec): min=12, max=584, avg=19.10, stdev= 6.06 00:12:35.520 clat (usec): min=70, max=3726, avg=152.82, stdev=39.63 00:12:35.520 lat (usec): min=130, max=3821, avg=171.92, stdev=40.65 00:12:35.520 clat percentiles (usec): 00:12:35.520 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 137], 00:12:35.520 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 155], 00:12:35.520 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 188], 00:12:35.520 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 383], 99.95th=[ 611], 00:12:35.520 | 99.99th=[ 1811] 00:12:35.520 bw ( KiB/s): min= 1072, max=12288, per=100.00%, avg=10187.49, stdev=2096.34, samples=39 00:12:35.520 iops : min= 268, max= 3072, avg=2546.87, stdev=524.08, samples=39 00:12:35.520 lat (usec) : 100=0.01%, 250=99.36%, 500=0.59%, 750=0.03%, 1000=0.01% 00:12:35.520 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:12:35.520 cpu : usr=0.56%, sys=2.11%, ctx=101194, majf=0, minf=5 00:12:35.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:35.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.520 issued rwts: total=50472,50688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:35.520 00:12:35.520 Run status group 0 (all jobs): 00:12:35.520 READ: bw=3365KiB/s (3446kB/s), 3365KiB/s-3365KiB/s (3446kB/s-3446kB/s), io=197MiB (207MB), run=60000-60000msec 00:12:35.520 WRITE: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:12:35.520 00:12:35.520 Disk stats (read/write): 00:12:35.520 nvme0n1: ios=50497/50447, merge=0/0, ticks=10302/8215, in_queue=18517, util=99.88% 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.520 07:39:59 -- common/autotest_common.sh@1208 -- # local i=0 00:12:35.520 07:39:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:35.520 07:39:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.520 07:39:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:35.520 07:39:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.520 nvmf hotplug test: fio successful as expected 00:12:35.520 07:39:59 -- common/autotest_common.sh@1220 -- # return 0 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.520 07:39:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.520 07:39:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.520 07:39:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:12:35.520 07:39:59 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:12:35.520 07:39:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:35.520 07:39:59 -- nvmf/common.sh@116 -- # sync 00:12:35.520 07:39:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:35.520 07:39:59 -- nvmf/common.sh@119 -- # set +e 00:12:35.520 07:39:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:35.520 07:39:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:35.520 rmmod nvme_tcp 00:12:35.520 rmmod nvme_fabrics 00:12:35.520 rmmod nvme_keyring 00:12:35.520 07:39:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:35.520 07:39:59 -- nvmf/common.sh@123 -- # set -e 00:12:35.520 07:39:59 -- nvmf/common.sh@124 -- # return 0 00:12:35.520 07:39:59 -- nvmf/common.sh@477 -- # '[' -n 67368 ']' 00:12:35.520 07:39:59 -- nvmf/common.sh@478 -- # killprocess 67368 00:12:35.520 07:39:59 -- common/autotest_common.sh@936 -- # '[' -z 67368 ']' 00:12:35.520 07:39:59 -- common/autotest_common.sh@940 -- # kill -0 67368 00:12:35.520 07:39:59 -- common/autotest_common.sh@941 -- # uname 00:12:35.520 07:39:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:35.520 07:39:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67368 00:12:35.520 07:39:59 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:35.520 07:39:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:35.520 killing process with pid 67368 00:12:35.520 07:39:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67368' 00:12:35.520 07:39:59 -- common/autotest_common.sh@955 -- # kill 67368 00:12:35.520 07:39:59 -- common/autotest_common.sh@960 -- # wait 67368 00:12:35.520 07:39:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:35.520 07:39:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:35.520 07:39:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:35.520 07:39:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.520 07:39:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:35.520 07:39:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.520 07:39:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.520 07:39:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.520 07:39:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:35.520 00:12:35.520 real 1m4.491s 00:12:35.520 user 3m54.048s 00:12:35.520 sys 0m20.828s 00:12:35.520 07:39:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.520 ************************************ 00:12:35.520 07:39:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.520 END TEST nvmf_initiator_timeout 00:12:35.520 ************************************ 00:12:35.520 07:39:59 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:12:35.520 07:39:59 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:12:35.520 07:39:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.520 07:39:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.520 07:39:59 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:12:35.520 07:39:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.520 07:39:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.520 07:39:59 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:12:35.520 07:39:59 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:12:35.521 07:39:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.521 07:39:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.521 07:39:59 -- common/autotest_common.sh@10 -- # set +x 00:12:35.521 ************************************ 00:12:35.521 START TEST nvmf_identify 00:12:35.521 ************************************ 00:12:35.521 07:39:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:12:35.521 * Looking for test storage... 
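The initiator_timeout run above ends by dropping all four Delay0 latency knobs to 30 us so that the queued fio job (waited on via 'wait 67432') can complete before the initiator times out; fio_status stays 0 and the script prints 'nvmf hotplug test: fio successful as expected'. A minimal standalone sketch of that latency-restore step, assuming a running SPDK target on the default /var/tmp/spdk.sock RPC socket and an existing delay bdev named Delay0 (rpc_cmd in the trace is assumed to forward to scripts/rpc.py):

# Illustrative reproduction of the bdev_delay_update_latency calls in the trace;
# the last argument is the new latency in microseconds, one call per latency type.
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30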
00:12:35.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:35.521 07:39:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:35.521 07:39:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:35.521 07:39:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:35.521 07:39:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:35.521 07:39:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:35.521 07:39:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:35.521 07:39:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:35.521 07:39:59 -- scripts/common.sh@335 -- # IFS=.-: 00:12:35.521 07:39:59 -- scripts/common.sh@335 -- # read -ra ver1 00:12:35.521 07:39:59 -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.521 07:39:59 -- scripts/common.sh@336 -- # read -ra ver2 00:12:35.521 07:39:59 -- scripts/common.sh@337 -- # local 'op=<' 00:12:35.521 07:39:59 -- scripts/common.sh@339 -- # ver1_l=2 00:12:35.521 07:39:59 -- scripts/common.sh@340 -- # ver2_l=1 00:12:35.521 07:39:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:35.521 07:39:59 -- scripts/common.sh@343 -- # case "$op" in 00:12:35.521 07:39:59 -- scripts/common.sh@344 -- # : 1 00:12:35.521 07:39:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:35.521 07:39:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.521 07:39:59 -- scripts/common.sh@364 -- # decimal 1 00:12:35.521 07:39:59 -- scripts/common.sh@352 -- # local d=1 00:12:35.521 07:39:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.521 07:39:59 -- scripts/common.sh@354 -- # echo 1 00:12:35.521 07:39:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:35.521 07:39:59 -- scripts/common.sh@365 -- # decimal 2 00:12:35.521 07:39:59 -- scripts/common.sh@352 -- # local d=2 00:12:35.521 07:39:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.521 07:39:59 -- scripts/common.sh@354 -- # echo 2 00:12:35.521 07:39:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:35.521 07:39:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:35.521 07:39:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:35.521 07:39:59 -- scripts/common.sh@367 -- # return 0 00:12:35.521 07:39:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.521 07:39:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:35.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.521 --rc genhtml_branch_coverage=1 00:12:35.521 --rc genhtml_function_coverage=1 00:12:35.521 --rc genhtml_legend=1 00:12:35.521 --rc geninfo_all_blocks=1 00:12:35.521 --rc geninfo_unexecuted_blocks=1 00:12:35.521 00:12:35.521 ' 00:12:35.521 07:39:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:35.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.521 --rc genhtml_branch_coverage=1 00:12:35.521 --rc genhtml_function_coverage=1 00:12:35.521 --rc genhtml_legend=1 00:12:35.521 --rc geninfo_all_blocks=1 00:12:35.521 --rc geninfo_unexecuted_blocks=1 00:12:35.521 00:12:35.521 ' 00:12:35.521 07:39:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:35.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.521 --rc genhtml_branch_coverage=1 00:12:35.521 --rc genhtml_function_coverage=1 00:12:35.521 --rc genhtml_legend=1 00:12:35.521 --rc geninfo_all_blocks=1 00:12:35.521 --rc geninfo_unexecuted_blocks=1 00:12:35.521 00:12:35.521 ' 00:12:35.521 
07:39:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:35.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.521 --rc genhtml_branch_coverage=1 00:12:35.521 --rc genhtml_function_coverage=1 00:12:35.521 --rc genhtml_legend=1 00:12:35.521 --rc geninfo_all_blocks=1 00:12:35.521 --rc geninfo_unexecuted_blocks=1 00:12:35.521 00:12:35.521 ' 00:12:35.521 07:39:59 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.521 07:39:59 -- nvmf/common.sh@7 -- # uname -s 00:12:35.521 07:39:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.521 07:39:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.521 07:39:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.521 07:39:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.521 07:39:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.521 07:39:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.521 07:39:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.521 07:39:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.521 07:39:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.521 07:39:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.521 07:39:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:12:35.521 07:39:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:12:35.521 07:39:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.521 07:39:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.521 07:39:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.521 07:39:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.521 07:39:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.521 07:39:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.521 07:39:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.521 07:39:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.521 07:39:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.521 07:39:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.521 07:39:59 -- paths/export.sh@5 -- # export PATH 00:12:35.522 07:39:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.522 07:39:59 -- nvmf/common.sh@46 -- # : 0 00:12:35.522 07:39:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:35.522 07:39:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:35.522 07:39:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:35.522 07:39:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.522 07:39:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.522 07:39:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:35.522 07:39:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:35.522 07:39:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:35.522 07:39:59 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:35.522 07:39:59 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:35.522 07:39:59 -- host/identify.sh@14 -- # nvmftestinit 00:12:35.522 07:39:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:35.522 07:39:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.522 07:39:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:35.522 07:39:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:35.522 07:39:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:35.522 07:39:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.522 07:39:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.522 07:39:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.522 07:39:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:35.522 07:39:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:35.522 07:39:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:35.522 07:39:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:35.522 07:39:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:35.522 07:39:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:35.522 07:39:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.522 07:39:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.522 07:39:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.522 07:39:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:35.522 07:39:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.522 07:39:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.522 07:39:59 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.522 07:39:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.522 07:39:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.522 07:39:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.522 07:39:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.522 07:39:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.522 07:39:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:35.522 07:39:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:35.522 Cannot find device "nvmf_tgt_br" 00:12:35.522 07:39:59 -- nvmf/common.sh@154 -- # true 00:12:35.522 07:39:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.522 Cannot find device "nvmf_tgt_br2" 00:12:35.522 07:39:59 -- nvmf/common.sh@155 -- # true 00:12:35.522 07:39:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:35.522 07:39:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:35.522 Cannot find device "nvmf_tgt_br" 00:12:35.522 07:39:59 -- nvmf/common.sh@157 -- # true 00:12:35.522 07:39:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:35.522 Cannot find device "nvmf_tgt_br2" 00:12:35.522 07:39:59 -- nvmf/common.sh@158 -- # true 00:12:35.522 07:39:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:35.522 07:39:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:35.522 07:39:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.522 07:39:59 -- nvmf/common.sh@161 -- # true 00:12:35.522 07:39:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.522 07:39:59 -- nvmf/common.sh@162 -- # true 00:12:35.522 07:39:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.522 07:39:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.522 07:39:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.522 07:39:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.522 07:39:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.522 07:39:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.522 07:39:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.522 07:39:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.522 07:39:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.522 07:39:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:35.522 07:39:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:35.522 07:39:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:35.522 07:39:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:35.522 07:39:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.522 07:39:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.522 07:39:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:35.522 07:39:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:35.522 07:39:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:35.522 07:40:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.522 07:40:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.522 07:40:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.522 07:40:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.522 07:40:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.522 07:40:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:35.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:35.522 00:12:35.522 --- 10.0.0.2 ping statistics --- 00:12:35.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.522 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:35.522 07:40:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:35.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:35.522 00:12:35.522 --- 10.0.0.3 ping statistics --- 00:12:35.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.522 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:35.522 07:40:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:35.522 00:12:35.522 --- 10.0.0.1 ping statistics --- 00:12:35.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.522 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:35.522 07:40:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.522 07:40:00 -- nvmf/common.sh@421 -- # return 0 00:12:35.522 07:40:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.522 07:40:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.522 07:40:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.523 07:40:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.523 07:40:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.523 07:40:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.523 07:40:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.523 07:40:00 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:12:35.523 07:40:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.523 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:12:35.523 07:40:00 -- host/identify.sh@19 -- # nvmfpid=68303 00:12:35.523 07:40:00 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.523 07:40:00 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:35.523 07:40:00 -- host/identify.sh@23 -- # waitforlisten 68303 00:12:35.523 07:40:00 -- common/autotest_common.sh@829 -- # '[' -z 68303 ']' 00:12:35.523 07:40:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.523 07:40:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.523 07:40:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:35.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.523 07:40:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.523 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:12:35.523 [2024-12-02 07:40:00.152487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:35.523 [2024-12-02 07:40:00.152572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.523 [2024-12-02 07:40:00.292254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.523 [2024-12-02 07:40:00.359875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.523 [2024-12-02 07:40:00.360033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.523 [2024-12-02 07:40:00.360048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.523 [2024-12-02 07:40:00.360058] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.523 [2024-12-02 07:40:00.360216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.523 [2024-12-02 07:40:00.360958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.523 [2024-12-02 07:40:00.361105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.523 [2024-12-02 07:40:00.361199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.523 07:40:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.523 07:40:01 -- common/autotest_common.sh@862 -- # return 0 00:12:35.523 07:40:01 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.523 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.523 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.523 [2024-12-02 07:40:01.123495] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.523 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.523 07:40:01 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:12:35.523 07:40:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.523 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 07:40:01 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.782 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.782 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 Malloc0 00:12:35.782 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.782 07:40:01 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:35.782 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.782 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.782 07:40:01 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:12:35.782 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.782 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 07:40:01 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.782 07:40:01 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.782 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.782 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 [2024-12-02 07:40:01.220467] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.782 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.782 07:40:01 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:35.782 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.782 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.782 07:40:01 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:12:35.782 07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.782 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:35.782 [2024-12-02 07:40:01.236229] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:35.782 [ 00:12:35.782 { 00:12:35.782 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.782 "subtype": "Discovery", 00:12:35.782 "listen_addresses": [ 00:12:35.782 { 00:12:35.782 "transport": "TCP", 00:12:35.782 "trtype": "TCP", 00:12:35.782 "adrfam": "IPv4", 00:12:35.782 "traddr": "10.0.0.2", 00:12:35.782 "trsvcid": "4420" 00:12:35.782 } 00:12:35.782 ], 00:12:35.782 "allow_any_host": true, 00:12:35.782 "hosts": [] 00:12:35.782 }, 00:12:35.782 { 00:12:35.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.782 "subtype": "NVMe", 00:12:35.782 "listen_addresses": [ 00:12:35.782 { 00:12:35.782 "transport": "TCP", 00:12:35.782 "trtype": "TCP", 00:12:35.782 "adrfam": "IPv4", 00:12:35.782 "traddr": "10.0.0.2", 00:12:35.782 "trsvcid": "4420" 00:12:35.782 } 00:12:35.782 ], 00:12:35.782 "allow_any_host": true, 00:12:35.782 "hosts": [], 00:12:35.782 "serial_number": "SPDK00000000000001", 00:12:35.782 "model_number": "SPDK bdev Controller", 00:12:35.782 "max_namespaces": 32, 00:12:35.782 "min_cntlid": 1, 00:12:35.782 "max_cntlid": 65519, 00:12:35.782 "namespaces": [ 00:12:35.782 { 00:12:35.782 "nsid": 1, 00:12:35.782 "bdev_name": "Malloc0", 00:12:35.782 "name": "Malloc0", 00:12:35.782 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:12:35.782 "eui64": "ABCDEF0123456789", 00:12:35.782 "uuid": "95c18b1a-5062-4a84-9b03-7a494f744d2d" 00:12:35.782 } 00:12:35.782 ] 00:12:35.782 } 00:12:35.782 ] 00:12:35.782 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.782 07:40:01 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:12:35.782 [2024-12-02 07:40:01.274341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
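Before the identify pass below, host/identify.sh assembles the target state reported by nvmf_get_subsystems above: a TCP transport, a 64 MB malloc bdev with 512-byte blocks exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, and listeners for both that subsystem and the discovery service on 10.0.0.2:4420. A condensed, illustrative sketch of the same RPC sequence and of the identify probe, assuming paths relative to the SPDK repo (in the trace the target itself runs inside the nvmf_tgt_ns_spdk network namespace, hence the ip netns exec prefixes):

# Condensed from the rpc_cmd calls in the trace above; illustrative only.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Query the discovery controller; adding -L all, as the script does, also enables
# the verbose nvme_tcp/nvme_ctrlr debug logging shown in the output that follows.
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'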
00:12:35.782 [2024-12-02 07:40:01.274397] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68338 ] 00:12:36.047 [2024-12-02 07:40:01.409812] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:12:36.047 [2024-12-02 07:40:01.409880] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:12:36.047 [2024-12-02 07:40:01.409899] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:12:36.047 [2024-12-02 07:40:01.409907] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:12:36.047 [2024-12-02 07:40:01.409916] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:12:36.047 [2024-12-02 07:40:01.410019] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:12:36.047 [2024-12-02 07:40:01.410109] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdfbd30 0 00:12:36.047 [2024-12-02 07:40:01.419358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:12:36.047 [2024-12-02 07:40:01.419379] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:12:36.047 [2024-12-02 07:40:01.419401] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:12:36.047 [2024-12-02 07:40:01.419404] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:12:36.047 [2024-12-02 07:40:01.419443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.419449] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.419453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.047 [2024-12-02 07:40:01.419465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:36.047 [2024-12-02 07:40:01.419492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.047 [2024-12-02 07:40:01.427341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.047 [2024-12-02 07:40:01.427360] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.047 [2024-12-02 07:40:01.427381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427386] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.047 [2024-12-02 07:40:01.427396] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:12:36.047 [2024-12-02 07:40:01.427403] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:12:36.047 [2024-12-02 07:40:01.427409] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:12:36.047 [2024-12-02 07:40:01.427424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427429] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427432] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.047 [2024-12-02 07:40:01.427441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.047 [2024-12-02 07:40:01.427467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.047 [2024-12-02 07:40:01.427520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.047 [2024-12-02 07:40:01.427528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.047 [2024-12-02 07:40:01.427531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.047 [2024-12-02 07:40:01.427541] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:12:36.047 [2024-12-02 07:40:01.427548] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:12:36.047 [2024-12-02 07:40:01.427555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427578] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.047 [2024-12-02 07:40:01.427586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.047 [2024-12-02 07:40:01.427604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.047 [2024-12-02 07:40:01.427654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.047 [2024-12-02 07:40:01.427661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.047 [2024-12-02 07:40:01.427665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.047 [2024-12-02 07:40:01.427669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.047 [2024-12-02 07:40:01.427674] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:12:36.048 [2024-12-02 07:40:01.427682] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:12:36.048 [2024-12-02 07:40:01.427689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.427693] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.427697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.427704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.048 [2024-12-02 07:40:01.427721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.048 [2024-12-02 07:40:01.427763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.048 [2024-12-02 07:40:01.427774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:12:36.048 [2024-12-02 07:40:01.427778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.427782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.048 [2024-12-02 07:40:01.427788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:36.048 [2024-12-02 07:40:01.427798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.427803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.427807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.427814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.048 [2024-12-02 07:40:01.427831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.048 [2024-12-02 07:40:01.427873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.048 [2024-12-02 07:40:01.427880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.048 [2024-12-02 07:40:01.427884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.427888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.048 [2024-12-02 07:40:01.427892] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:12:36.048 [2024-12-02 07:40:01.427897] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:12:36.048 [2024-12-02 07:40:01.427914] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:36.048 [2024-12-02 07:40:01.428019] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:12:36.048 [2024-12-02 07:40:01.428024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:36.048 [2024-12-02 07:40:01.428032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.048 [2024-12-02 07:40:01.428065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.048 [2024-12-02 07:40:01.428120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.048 [2024-12-02 07:40:01.428127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.048 [2024-12-02 07:40:01.428130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428134] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.048 [2024-12-02 07:40:01.428139] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:36.048 [2024-12-02 07:40:01.428149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428153] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.048 [2024-12-02 07:40:01.428181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.048 [2024-12-02 07:40:01.428230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.048 [2024-12-02 07:40:01.428237] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.048 [2024-12-02 07:40:01.428240] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428244] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.048 [2024-12-02 07:40:01.428249] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:36.048 [2024-12-02 07:40:01.428253] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:12:36.048 [2024-12-02 07:40:01.428261] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:12:36.048 [2024-12-02 07:40:01.428276] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:12:36.048 [2024-12-02 07:40:01.428286] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.048 [2024-12-02 07:40:01.428334] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.048 [2024-12-02 07:40:01.428422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.048 [2024-12-02 07:40:01.428430] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.048 [2024-12-02 07:40:01.428434] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428438] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdfbd30): datao=0, datal=4096, cccid=0 00:12:36.048 [2024-12-02 07:40:01.428442] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe59f30) on tqpair(0xdfbd30): expected_datao=0, payload_size=4096 00:12:36.048 [2024-12-02 07:40:01.428451] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428456] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.048 [2024-12-02 07:40:01.428471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.048 [2024-12-02 07:40:01.428474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428478] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.048 [2024-12-02 07:40:01.428486] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:12:36.048 [2024-12-02 07:40:01.428492] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:12:36.048 [2024-12-02 07:40:01.428496] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:12:36.048 [2024-12-02 07:40:01.428501] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:12:36.048 [2024-12-02 07:40:01.428506] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:12:36.048 [2024-12-02 07:40:01.428511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:12:36.048 [2024-12-02 07:40:01.428523] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:12:36.048 [2024-12-02 07:40:01.428531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.048 [2024-12-02 07:40:01.428566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.048 [2024-12-02 07:40:01.428616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.048 [2024-12-02 07:40:01.428623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.048 [2024-12-02 07:40:01.428626] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428630] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe59f30) on tqpair=0xdfbd30 00:12:36.048 [2024-12-02 07:40:01.428637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428641] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428645] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.048 [2024-12-02 07:40:01.428658] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.048 [2024-12-02 07:40:01.428677] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.048 [2024-12-02 07:40:01.428695] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428699] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.048 [2024-12-02 07:40:01.428703] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.048 [2024-12-02 07:40:01.428708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.048 [2024-12-02 07:40:01.428713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:12:36.049 [2024-12-02 07:40:01.428726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:36.049 [2024-12-02 07:40:01.428733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.428737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.428740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdfbd30) 00:12:36.049 [2024-12-02 07:40:01.428747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.049 [2024-12-02 07:40:01.428767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe59f30, cid 0, qid 0 00:12:36.049 [2024-12-02 07:40:01.428774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a090, cid 1, qid 0 00:12:36.049 [2024-12-02 07:40:01.428778] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a1f0, cid 2, qid 0 00:12:36.049 [2024-12-02 07:40:01.428783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.049 [2024-12-02 07:40:01.428788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a4b0, cid 4, qid 0 00:12:36.049 [2024-12-02 07:40:01.428876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.049 [2024-12-02 07:40:01.428883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.049 [2024-12-02 07:40:01.428886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.428890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a4b0) on tqpair=0xdfbd30 00:12:36.049 
[2024-12-02 07:40:01.428895] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:12:36.049 [2024-12-02 07:40:01.428900] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:12:36.049 [2024-12-02 07:40:01.428911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.428916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.428920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdfbd30) 00:12:36.049 [2024-12-02 07:40:01.428927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.049 [2024-12-02 07:40:01.428944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a4b0, cid 4, qid 0 00:12:36.049 [2024-12-02 07:40:01.429001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.049 [2024-12-02 07:40:01.429008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.049 [2024-12-02 07:40:01.429012] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429016] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdfbd30): datao=0, datal=4096, cccid=4 00:12:36.049 [2024-12-02 07:40:01.429020] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5a4b0) on tqpair(0xdfbd30): expected_datao=0, payload_size=4096 00:12:36.049 [2024-12-02 07:40:01.429028] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429032] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.049 [2024-12-02 07:40:01.429046] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.049 [2024-12-02 07:40:01.429050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429054] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a4b0) on tqpair=0xdfbd30 00:12:36.049 [2024-12-02 07:40:01.429066] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:12:36.049 [2024-12-02 07:40:01.429089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429095] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429099] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdfbd30) 00:12:36.049 [2024-12-02 07:40:01.429106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.049 [2024-12-02 07:40:01.429113] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdfbd30) 00:12:36.049 [2024-12-02 07:40:01.429127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:12:36.049 [2024-12-02 07:40:01.429150] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a4b0, cid 4, qid 0 00:12:36.049 [2024-12-02 07:40:01.429157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a610, cid 5, qid 0 00:12:36.049 [2024-12-02 07:40:01.429258] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.049 [2024-12-02 07:40:01.429265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.049 [2024-12-02 07:40:01.429268] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429272] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdfbd30): datao=0, datal=1024, cccid=4 00:12:36.049 [2024-12-02 07:40:01.429277] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5a4b0) on tqpair(0xdfbd30): expected_datao=0, payload_size=1024 00:12:36.049 [2024-12-02 07:40:01.429284] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429288] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.049 [2024-12-02 07:40:01.429334] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.049 [2024-12-02 07:40:01.429338] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a610) on tqpair=0xdfbd30 00:12:36.049 [2024-12-02 07:40:01.429361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.049 [2024-12-02 07:40:01.429369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.049 [2024-12-02 07:40:01.429373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a4b0) on tqpair=0xdfbd30 00:12:36.049 [2024-12-02 07:40:01.429393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429402] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdfbd30) 00:12:36.049 [2024-12-02 07:40:01.429409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.049 [2024-12-02 07:40:01.429435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a4b0, cid 4, qid 0 00:12:36.049 [2024-12-02 07:40:01.429503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.049 [2024-12-02 07:40:01.429510] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.049 [2024-12-02 07:40:01.429514] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429518] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdfbd30): datao=0, datal=3072, cccid=4 00:12:36.049 [2024-12-02 07:40:01.429522] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5a4b0) on tqpair(0xdfbd30): expected_datao=0, payload_size=3072 00:12:36.049 [2024-12-02 07:40:01.429530] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 
07:40:01.429534] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.049 [2024-12-02 07:40:01.429549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.049 [2024-12-02 07:40:01.429552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a4b0) on tqpair=0xdfbd30 00:12:36.049 [2024-12-02 07:40:01.429566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdfbd30) 00:12:36.049 [2024-12-02 07:40:01.429581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.049 [2024-12-02 07:40:01.429604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a4b0, cid 4, qid 0 00:12:36.049 [2024-12-02 07:40:01.429666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.049 [2024-12-02 07:40:01.429673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.049 [2024-12-02 07:40:01.429676] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.049 [2024-12-02 07:40:01.429680] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdfbd30): datao=0, datal=8, cccid=4 00:12:36.049 [2024-12-02 07:40:01.429685] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5a4b0) on tqpair(0xdfbd30): expected_datao=0, payload_size=8 00:12:36.049 [2024-12-02 07:40:01.429707] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.049 ===================================================== 00:12:36.049 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:12:36.049 ===================================================== 00:12:36.049 Controller Capabilities/Features 00:12:36.049 ================================ 00:12:36.049 Vendor ID: 0000 00:12:36.049 Subsystem Vendor ID: 0000 00:12:36.049 Serial Number: .................... 00:12:36.049 Model Number: ........................................ 
00:12:36.049 Firmware Version: 24.01.1
00:12:36.049 Recommended Arb Burst: 0
00:12:36.049 IEEE OUI Identifier: 00 00 00
00:12:36.049 Multi-path I/O
00:12:36.049 May have multiple subsystem ports: No
00:12:36.049 May have multiple controllers: No
00:12:36.049 Associated with SR-IOV VF: No
00:12:36.049 Max Data Transfer Size: 131072
00:12:36.049 Max Number of Namespaces: 0
00:12:36.049 Max Number of I/O Queues: 1024
00:12:36.049 NVMe Specification Version (VS): 1.3
00:12:36.049 NVMe Specification Version (Identify): 1.3
00:12:36.049 Maximum Queue Entries: 128
00:12:36.049 Contiguous Queues Required: Yes
00:12:36.049 Arbitration Mechanisms Supported
00:12:36.049 Weighted Round Robin: Not Supported
00:12:36.049 Vendor Specific: Not Supported
00:12:36.049 Reset Timeout: 15000 ms
00:12:36.049 Doorbell Stride: 4 bytes
00:12:36.049 NVM Subsystem Reset: Not Supported
00:12:36.049 Command Sets Supported
00:12:36.050 NVM Command Set: Supported
00:12:36.050 Boot Partition: Not Supported
00:12:36.050 Memory Page Size Minimum: 4096 bytes
00:12:36.050 Memory Page Size Maximum: 4096 bytes
00:12:36.050 Persistent Memory Region: Not Supported
00:12:36.050 Optional Asynchronous Events Supported
00:12:36.050 Namespace Attribute Notices: Not Supported
00:12:36.050 Firmware Activation Notices: Not Supported
00:12:36.050 ANA Change Notices: Not Supported
00:12:36.050 PLE Aggregate Log Change Notices: Not Supported
00:12:36.050 LBA Status Info Alert Notices: Not Supported
00:12:36.050 EGE Aggregate Log Change Notices: Not Supported
00:12:36.050 Normal NVM Subsystem Shutdown event: Not Supported
00:12:36.050 Zone Descriptor Change Notices: Not Supported
00:12:36.050 Discovery Log Change Notices: Supported
00:12:36.050 Controller Attributes
00:12:36.050 128-bit Host Identifier: Not Supported
00:12:36.050 Non-Operational Permissive Mode: Not Supported
00:12:36.050 NVM Sets: Not Supported
00:12:36.050 Read Recovery Levels: Not Supported
00:12:36.050 Endurance Groups: Not Supported
00:12:36.050 Predictable Latency Mode: Not Supported
00:12:36.050 Traffic Based Keep ALive: Not Supported
00:12:36.050 Namespace Granularity: Not Supported
00:12:36.050 SQ Associations: Not Supported
00:12:36.050 UUID List: Not Supported
00:12:36.050 Multi-Domain Subsystem: Not Supported
00:12:36.050 Fixed Capacity Management: Not Supported
00:12:36.050 Variable Capacity Management: Not Supported
00:12:36.050 Delete Endurance Group: Not Supported
00:12:36.050 Delete NVM Set: Not Supported
00:12:36.050 Extended LBA Formats Supported: Not Supported
00:12:36.050 Flexible Data Placement Supported: Not Supported
00:12:36.050
00:12:36.050 Controller Memory Buffer Support
00:12:36.050 ================================
00:12:36.050 Supported: No
00:12:36.050
00:12:36.050 Persistent Memory Region Support
00:12:36.050 ================================
00:12:36.050 Supported: No
00:12:36.050
00:12:36.050 Admin Command Set Attributes
00:12:36.050 ============================
00:12:36.050 Security Send/Receive: Not Supported
00:12:36.050 Format NVM: Not Supported
00:12:36.050 Firmware Activate/Download: Not Supported
00:12:36.050 Namespace Management: Not Supported
00:12:36.050 Device Self-Test: Not Supported
00:12:36.050 Directives: Not Supported
00:12:36.050 NVMe-MI: Not Supported
00:12:36.050 Virtualization Management: Not Supported
00:12:36.050 Doorbell Buffer Config: Not Supported
00:12:36.050 Get LBA Status Capability: Not Supported
00:12:36.050 Command & Feature Lockdown Capability: Not Supported
00:12:36.050 Abort Command Limit: 1
00:12:36.050 Async Event Request Limit: 4
00:12:36.050 Number of Firmware Slots: N/A
00:12:36.050 Firmware Slot 1 Read-Only: N/A
00:12:36.050 Firmware Activation Without Reset: N/A
00:12:36.050 Multiple Update Detection Support: N/A
00:12:36.050 Firmware Update Granularity: No Information Provided
00:12:36.050 Per-Namespace SMART Log: No
00:12:36.050 Asymmetric Namespace Access Log Page: Not Supported
00:12:36.050 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:12:36.050 Command Effects Log Page: Not Supported
00:12:36.050 Get Log Page Extended Data: Supported
00:12:36.050 Telemetry Log Pages: Not Supported
00:12:36.050 Persistent Event Log Pages: Not Supported
00:12:36.050 Supported Log Pages Log Page: May Support
00:12:36.050 Commands Supported & Effects Log Page: Not Supported
00:12:36.050 Feature Identifiers & Effects Log Page:May Support
00:12:36.050 NVMe-MI Commands & Effects Log Page: May Support
00:12:36.050 Data Area 4 for Telemetry Log: Not Supported
00:12:36.050 Error Log Page Entries Supported: 128
00:12:36.050 Keep Alive: Not Supported
00:12:36.050
00:12:36.050 NVM Command Set Attributes
00:12:36.050 ==========================
00:12:36.050 Submission Queue Entry Size
00:12:36.050 Max: 1
00:12:36.050 Min: 1
00:12:36.050 Completion Queue Entry Size
00:12:36.050 Max: 1
00:12:36.050 Min: 1
00:12:36.050 Number of Namespaces: 0
00:12:36.050 Compare Command: Not Supported
00:12:36.050 Write Uncorrectable Command: Not Supported
00:12:36.050 Dataset Management Command: Not Supported
00:12:36.050 Write Zeroes Command: Not Supported
00:12:36.050 Set Features Save Field: Not Supported
00:12:36.050 Reservations: Not Supported
00:12:36.050 Timestamp: Not Supported
00:12:36.050 Copy: Not Supported
00:12:36.050 Volatile Write Cache: Not Present
00:12:36.050 Atomic Write Unit (Normal): 1
00:12:36.050 Atomic Write Unit (PFail): 1
00:12:36.050 Atomic Compare & Write Unit: 1
00:12:36.050 Fused Compare & Write: Supported
00:12:36.050 Scatter-Gather List
00:12:36.050 SGL Command Set: Supported
00:12:36.050 SGL Keyed: Supported
00:12:36.050 SGL Bit Bucket Descriptor: Not Supported
00:12:36.050 SGL Metadata Pointer: Not Supported
00:12:36.050 Oversized SGL: Not Supported
00:12:36.050 SGL Metadata Address: Not Supported
00:12:36.050 SGL Offset: Supported
00:12:36.050 Transport SGL Data Block: Not Supported
00:12:36.050 Replay Protected Memory Block: Not Supported
00:12:36.050
00:12:36.050 Firmware Slot Information
00:12:36.050 =========================
00:12:36.050 Active slot: 0
00:12:36.050
00:12:36.050
00:12:36.050 Error Log
00:12:36.050 =========
00:12:36.050
00:12:36.050 Active Namespaces
00:12:36.050 =================
00:12:36.050 Discovery Log Page
00:12:36.050 ==================
00:12:36.050 Generation Counter: 2
00:12:36.050 Number of Records: 2
00:12:36.050 Record Format: 0
00:12:36.050
00:12:36.050 Discovery Log Entry 0
00:12:36.050 ----------------------
00:12:36.050 Transport Type: 3 (TCP)
00:12:36.050 Address Family: 1 (IPv4)
00:12:36.050 Subsystem Type: 3 (Current Discovery Subsystem)
00:12:36.050 Entry Flags:
00:12:36.050 Duplicate Returned Information: 1
00:12:36.050 Explicit Persistent Connection Support for Discovery: 1
00:12:36.050 Transport Requirements:
00:12:36.050 Secure Channel: Not Required
00:12:36.050 Port ID: 0 (0x0000)
00:12:36.050 Controller ID: 65535 (0xffff)
00:12:36.050 Admin Max SQ Size: 128
00:12:36.050 Transport Service Identifier: 4420
00:12:36.050 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:12:36.050 Transport Address: 10.0.0.2
00:12:36.050
Discovery Log Entry 1 00:12:36.050 ---------------------- 00:12:36.050 Transport Type: 3 (TCP) 00:12:36.050 Address Family: 1 (IPv4) 00:12:36.050 Subsystem Type: 2 (NVM Subsystem) 00:12:36.050 Entry Flags: 00:12:36.050 Duplicate Returned Information: 0 00:12:36.050 Explicit Persistent Connection Support for Discovery: 0 00:12:36.050 Transport Requirements: 00:12:36.050 Secure Channel: Not Required 00:12:36.050 Port ID: 0 (0x0000) 00:12:36.050 Controller ID: 65535 (0xffff) 00:12:36.050 Admin Max SQ Size: 128 00:12:36.050 Transport Service Identifier: 4420 00:12:36.050 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:12:36.050 Transport Address: 10.0.0.2 [2024-12-02 07:40:01.429711] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.050 [2024-12-02 07:40:01.429725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.050 [2024-12-02 07:40:01.429733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.050 [2024-12-02 07:40:01.429736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.050 [2024-12-02 07:40:01.429740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a4b0) on tqpair=0xdfbd30 00:12:36.050 [2024-12-02 07:40:01.429826] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:12:36.050 [2024-12-02 07:40:01.429842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.050 [2024-12-02 07:40:01.429849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.050 [2024-12-02 07:40:01.429855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.050 [2024-12-02 07:40:01.429861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.050 [2024-12-02 07:40:01.429870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.050 [2024-12-02 07:40:01.429874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.050 [2024-12-02 07:40:01.429878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.050 [2024-12-02 07:40:01.429885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.050 [2024-12-02 07:40:01.429906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.050 [2024-12-02 07:40:01.429953] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.050 [2024-12-02 07:40:01.429960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.050 [2024-12-02 07:40:01.429964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.429968] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.429975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.429979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.429983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 
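The identify output above ends with a two-record discovery log from the discovery subsystem at 10.0.0.2:4420: one record for the discovery subsystem itself and one for nqn.2016-06.io.spdk:cnode1. The GET LOG PAGE (02) admin commands earlier in the trace, with cdw10 values whose low byte is 0x70, are the reads that fetched that page. For reference, a minimal sketch of reading the same log page through SPDK's public NVMe driver API might look like the following; this is not the test's actual code path (the test drives everything through the spdk_nvme_identify binary), the single 4 KiB read is a simplification of the chunked reads in the trace, and error handling is abbreviated.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE (discovery) failed\n");
	}
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvmf_discovery_log_page *log;

	spdk_env_opts_init(&opts);
	opts.name = "discovery_log_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	/* Same target and discovery NQN the captured run connected to. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* One 4 KiB read for brevity; the trace above pulls the page in
	 * several GET LOG PAGE chunks instead. */
	log = spdk_zmalloc(4096, 4096, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (log != NULL &&
	    spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					     log, 4096, 0, get_log_done, NULL) == 0) {
		while (!g_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
		/* Should match "Generation Counter: 2" / "Number of Records: 2" above. */
		printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
		       log->genctr, log->numrec);
	}

	spdk_free(log);
	spdk_nvme_detach(ctrlr);
	return 0;
}

Each returned record should correspond to one "Discovery Log Entry" block printed above; struct spdk_nvmf_discovery_log_page_entry carries the trtype, adrfam, trsvcid, traddr and subnqn fields that the identify tool formats into those blocks.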
[2024-12-02 07:40:01.429990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430104] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430125] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:12:36.051 [2024-12-02 07:40:01.430130] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:12:36.051 [2024-12-02 07:40:01.430141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430176] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430238] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430258] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430414] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 
07:40:01.430418] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430524] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430651] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430708] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430829] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.430906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.430913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.430917] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.430931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.430939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.430946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.430963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.431008] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.431015] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.431018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.431032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.431047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.431063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, 
qid 0 00:12:36.051 [2024-12-02 07:40:01.431113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.431120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.431123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431127] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.431137] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431145] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.051 [2024-12-02 07:40:01.431152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.051 [2024-12-02 07:40:01.431168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.051 [2024-12-02 07:40:01.431215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.051 [2024-12-02 07:40:01.431222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.051 [2024-12-02 07:40:01.431226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.051 [2024-12-02 07:40:01.431240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.051 [2024-12-02 07:40:01.431248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.052 [2024-12-02 07:40:01.431255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.052 [2024-12-02 07:40:01.431271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.052 [2024-12-02 07:40:01.431316] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.435348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.435362] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.435368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.052 [2024-12-02 07:40:01.435399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.435404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.435408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdfbd30) 00:12:36.052 [2024-12-02 07:40:01.435416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.052 [2024-12-02 07:40:01.435451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a350, cid 3, qid 0 00:12:36.052 [2024-12-02 07:40:01.435508] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.435515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.435518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.435522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe5a350) on tqpair=0xdfbd30 00:12:36.052 [2024-12-02 07:40:01.435530] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:12:36.052 00:12:36.052 07:40:01 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:12:36.052 [2024-12-02 07:40:01.472091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:36.052 [2024-12-02 07:40:01.472137] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68340 ] 00:12:36.052 [2024-12-02 07:40:01.610698] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:12:36.052 [2024-12-02 07:40:01.610779] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:12:36.052 [2024-12-02 07:40:01.610785] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:12:36.052 [2024-12-02 07:40:01.610795] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:12:36.052 [2024-12-02 07:40:01.610804] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:12:36.052 [2024-12-02 07:40:01.610896] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:12:36.052 [2024-12-02 07:40:01.610941] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x222bd30 0 00:12:36.052 [2024-12-02 07:40:01.616350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:12:36.052 [2024-12-02 07:40:01.616370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:12:36.052 [2024-12-02 07:40:01.616391] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:12:36.052 [2024-12-02 07:40:01.616395] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:12:36.052 [2024-12-02 07:40:01.616434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.616440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.616444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.052 [2024-12-02 07:40:01.616455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:36.052 [2024-12-02 07:40:01.616485] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.052 [2024-12-02 07:40:01.623360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.623378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.623399] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.623403] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.052 [2024-12-02 07:40:01.623416] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:12:36.052 [2024-12-02 07:40:01.623423] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:12:36.052 [2024-12-02 07:40:01.623429] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:12:36.052 [2024-12-02 07:40:01.623442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.623447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.623451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.052 [2024-12-02 07:40:01.623460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.052 [2024-12-02 07:40:01.623483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.052 [2024-12-02 07:40:01.623542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.623549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.623553] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.623557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.052 [2024-12-02 07:40:01.623563] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:12:36.052 [2024-12-02 07:40:01.623570] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:12:36.052 [2024-12-02 07:40:01.623578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.623582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.623586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.052 [2024-12-02 07:40:01.623593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.052 [2024-12-02 07:40:01.623610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.052 [2024-12-02 07:40:01.624053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.624067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.624072] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.624076] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.052 [2024-12-02 07:40:01.624083] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:12:36.052 [2024-12-02 07:40:01.624092] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:12:36.052 [2024-12-02 07:40:01.624100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.052 [2024-12-02 
07:40:01.624104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.624108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.052 [2024-12-02 07:40:01.624115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.052 [2024-12-02 07:40:01.624133] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.052 [2024-12-02 07:40:01.624187] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.624194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.624198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.624202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.052 [2024-12-02 07:40:01.624208] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:36.052 [2024-12-02 07:40:01.624219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.624223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.052 [2024-12-02 07:40:01.624227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.052 [2024-12-02 07:40:01.624234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.052 [2024-12-02 07:40:01.624250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.052 [2024-12-02 07:40:01.624771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.052 [2024-12-02 07:40:01.624786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.052 [2024-12-02 07:40:01.624790] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.624794] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.053 [2024-12-02 07:40:01.624800] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:12:36.053 [2024-12-02 07:40:01.624805] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:12:36.053 [2024-12-02 07:40:01.624813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:36.053 [2024-12-02 07:40:01.624919] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:12:36.053 [2024-12-02 07:40:01.624924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:36.053 [2024-12-02 07:40:01.624947] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.624952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.624956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.053 
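The controller state transitions logged on either side of this point ("CC.EN = 0 && CSTS.RDY = 0", "setting state to enable controller by writing CC.EN = 1", then "setting state to wait for CSTS.RDY = 1") are the standard NVMe controller-enable handshake; over TCP each register access is carried by one of the FABRIC PROPERTY GET/SET capsules visible in the trace. A rough, illustrative sketch of the path the log takes, using the register layouts from SPDK's spdk/nvme_spec.h and hypothetical prop_get()/prop_set() helpers standing in for those property capsules:

/* Sketch of the CC.EN / CSTS.RDY handshake recorded above. The register
 * layouts come from spdk/nvme_spec.h; prop_get() and prop_set() are
 * hypothetical helpers representing FABRIC PROPERTY GET/SET. */
#include <stddef.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

extern uint32_t prop_get(uint32_t offset);              /* hypothetical */
extern void prop_set(uint32_t offset, uint32_t value);  /* hypothetical */

void
enable_controller(void)
{
	union spdk_nvme_cc_register cc;
	union spdk_nvme_csts_register csts;

	cc.raw = prop_get(offsetof(struct spdk_nvme_registers, cc.raw));
	csts.raw = prop_get(offsetof(struct spdk_nvme_registers, csts.raw));

	if (cc.bits.en == 0 && csts.bits.rdy == 0) {
		/* "CC.EN = 0 && CSTS.RDY = 0": controller is disabled. */
		cc.bits.en = 1;
		prop_set(offsetof(struct spdk_nvme_registers, cc.raw), cc.raw);

		/* "wait for CSTS.RDY = 1" before moving on to IDENTIFY. */
		do {
			csts.raw = prop_get(offsetof(struct spdk_nvme_registers, csts.raw));
		} while (csts.bits.rdy == 0);
	}
}

In the driver itself this logic lives in the non-blocking init state machine of nvme_ctrlr.c rather than a polling loop; the sketch only compresses the sequence that the surrounding debug lines record.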
[2024-12-02 07:40:01.624963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.053 [2024-12-02 07:40:01.624983] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.053 [2024-12-02 07:40:01.625098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.053 [2024-12-02 07:40:01.625105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.053 [2024-12-02 07:40:01.625108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.053 [2024-12-02 07:40:01.625118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:36.053 [2024-12-02 07:40:01.625128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.625144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.053 [2024-12-02 07:40:01.625160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.053 [2024-12-02 07:40:01.625601] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.053 [2024-12-02 07:40:01.625617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.053 [2024-12-02 07:40:01.625622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.053 [2024-12-02 07:40:01.625633] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:36.053 [2024-12-02 07:40:01.625639] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.625648] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:12:36.053 [2024-12-02 07:40:01.625664] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.625675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.625721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.053 [2024-12-02 07:40:01.625757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.053 [2024-12-02 07:40:01.625966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:12:36.053 [2024-12-02 07:40:01.625973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.053 [2024-12-02 07:40:01.625977] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625981] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=4096, cccid=0 00:12:36.053 [2024-12-02 07:40:01.625986] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2289f30) on tqpair(0x222bd30): expected_datao=0, payload_size=4096 00:12:36.053 [2024-12-02 07:40:01.625994] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.625999] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626368] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.053 [2024-12-02 07:40:01.626384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.053 [2024-12-02 07:40:01.626389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.053 [2024-12-02 07:40:01.626403] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:12:36.053 [2024-12-02 07:40:01.626408] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:12:36.053 [2024-12-02 07:40:01.626413] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:12:36.053 [2024-12-02 07:40:01.626418] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:12:36.053 [2024-12-02 07:40:01.626424] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:12:36.053 [2024-12-02 07:40:01.626429] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.626443] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.626452] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626456] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.626469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.053 [2024-12-02 07:40:01.626491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.053 [2024-12-02 07:40:01.626693] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.053 [2024-12-02 07:40:01.626700] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.053 [2024-12-02 07:40:01.626718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2289f30) on tqpair=0x222bd30 00:12:36.053 [2024-12-02 07:40:01.626730] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626734] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.626744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.053 [2024-12-02 07:40:01.626750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626758] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.626764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.053 [2024-12-02 07:40:01.626770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626777] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.626783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.053 [2024-12-02 07:40:01.626789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626793] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.626802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.053 [2024-12-02 07:40:01.626807] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.626819] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.626826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.626834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.053 [2024-12-02 07:40:01.626841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.053 [2024-12-02 07:40:01.626860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2289f30, cid 0, qid 0 00:12:36.053 [2024-12-02 07:40:01.626867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a090, cid 1, qid 0 00:12:36.053 [2024-12-02 07:40:01.626872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a1f0, cid 2, qid 0 00:12:36.053 [2024-12-02 07:40:01.626877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.053 [2024-12-02 07:40:01.626881] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.053 [2024-12-02 07:40:01.627161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.053 [2024-12-02 07:40:01.627175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.053 [2024-12-02 07:40:01.627180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.053 [2024-12-02 07:40:01.627184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.053 [2024-12-02 07:40:01.627190] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:12:36.053 [2024-12-02 07:40:01.627196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.627205] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.627215] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:36.053 [2024-12-02 07:40:01.627222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627226] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627230] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.627238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.054 [2024-12-02 07:40:01.627256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.054 [2024-12-02 07:40:01.627333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.627341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.627345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.627409] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.627421] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.627430] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627434] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627438] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.627445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.054 [2024-12-02 07:40:01.627465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.054 [2024-12-02 07:40:01.627878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 7 00:12:36.054 [2024-12-02 07:40:01.627893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.054 [2024-12-02 07:40:01.627897] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627901] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=4096, cccid=4 00:12:36.054 [2024-12-02 07:40:01.627906] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a4b0) on tqpair(0x222bd30): expected_datao=0, payload_size=4096 00:12:36.054 [2024-12-02 07:40:01.627914] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627918] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.627933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.627936] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627940] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.627956] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:12:36.054 [2024-12-02 07:40:01.627966] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.627977] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.627985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.627993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.628000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.054 [2024-12-02 07:40:01.628019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.054 [2024-12-02 07:40:01.628409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.054 [2024-12-02 07:40:01.628424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.054 [2024-12-02 07:40:01.628428] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628432] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=4096, cccid=4 00:12:36.054 [2024-12-02 07:40:01.628437] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a4b0) on tqpair(0x222bd30): expected_datao=0, payload_size=4096 00:12:36.054 [2024-12-02 07:40:01.628445] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628449] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.628464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.628468] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.628488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.628499] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.628508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.628524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.054 [2024-12-02 07:40:01.628544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.054 [2024-12-02 07:40:01.628947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.054 [2024-12-02 07:40:01.628962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.054 [2024-12-02 07:40:01.628966] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628970] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=4096, cccid=4 00:12:36.054 [2024-12-02 07:40:01.628975] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a4b0) on tqpair(0x222bd30): expected_datao=0, payload_size=4096 00:12:36.054 [2024-12-02 07:40:01.628983] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628988] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.628996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.629002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.629005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629009] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.629019] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.629028] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.629040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.629047] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.629052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.629057] 
nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:12:36.054 [2024-12-02 07:40:01.629062] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:12:36.054 [2024-12-02 07:40:01.629067] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:12:36.054 [2024-12-02 07:40:01.629081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629086] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629090] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.629097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.054 [2024-12-02 07:40:01.629104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.629118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.054 [2024-12-02 07:40:01.629141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.054 [2024-12-02 07:40:01.629148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a610, cid 5, qid 0 00:12:36.054 [2024-12-02 07:40:01.629617] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.629629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.629634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629638] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.629646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.629653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.629672] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629675] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a610) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.629687] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629691] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.629695] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x222bd30) 00:12:36.054 [2024-12-02 07:40:01.629702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.054 [2024-12-02 07:40:01.629721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a610, cid 5, qid 0 00:12:36.054 [2024-12-02 07:40:01.630017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.054 [2024-12-02 07:40:01.630028] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.054 [2024-12-02 07:40:01.630032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.630036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a610) on tqpair=0x222bd30 00:12:36.054 [2024-12-02 07:40:01.630048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.054 [2024-12-02 07:40:01.630062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x222bd30) 00:12:36.055 [2024-12-02 07:40:01.630089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.055 [2024-12-02 07:40:01.630107] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a610, cid 5, qid 0 00:12:36.055 [2024-12-02 07:40:01.630164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.055 [2024-12-02 07:40:01.630171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.055 [2024-12-02 07:40:01.630175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630179] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a610) on tqpair=0x222bd30 00:12:36.055 [2024-12-02 07:40:01.630190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x222bd30) 00:12:36.055 [2024-12-02 07:40:01.630205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.055 [2024-12-02 07:40:01.630221] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a610, cid 5, qid 0 00:12:36.055 [2024-12-02 07:40:01.630767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.055 [2024-12-02 07:40:01.630781] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.055 [2024-12-02 07:40:01.630786] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630790] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a610) on tqpair=0x222bd30 00:12:36.055 [2024-12-02 07:40:01.630804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x222bd30) 00:12:36.055 [2024-12-02 07:40:01.630821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.055 [2024-12-02 07:40:01.630829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x222bd30) 00:12:36.055 [2024-12-02 
07:40:01.630843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.055 [2024-12-02 07:40:01.630865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x222bd30) 00:12:36.055 [2024-12-02 07:40:01.630878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.055 [2024-12-02 07:40:01.630885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.630892] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x222bd30) 00:12:36.055 [2024-12-02 07:40:01.630898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.055 [2024-12-02 07:40:01.630934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a610, cid 5, qid 0 00:12:36.055 [2024-12-02 07:40:01.630941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a4b0, cid 4, qid 0 00:12:36.055 [2024-12-02 07:40:01.630945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a770, cid 6, qid 0 00:12:36.055 [2024-12-02 07:40:01.630950] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a8d0, cid 7, qid 0 00:12:36.055 [2024-12-02 07:40:01.635340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.055 [2024-12-02 07:40:01.635358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.055 [2024-12-02 07:40:01.635378] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635382] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=8192, cccid=5 00:12:36.055 [2024-12-02 07:40:01.635387] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a610) on tqpair(0x222bd30): expected_datao=0, payload_size=8192 00:12:36.055 [2024-12-02 07:40:01.635395] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635399] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.055 [2024-12-02 07:40:01.635411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.055 [2024-12-02 07:40:01.635414] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635418] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=512, cccid=4 00:12:36.055 [2024-12-02 07:40:01.635422] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a4b0) on tqpair(0x222bd30): expected_datao=0, payload_size=512 00:12:36.055 [2024-12-02 07:40:01.635429] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635433] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.055 [2024-12-02 07:40:01.635444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.055 [2024-12-02 07:40:01.635447] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635451] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=512, cccid=6 00:12:36.055 [2024-12-02 07:40:01.635456] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a770) on tqpair(0x222bd30): expected_datao=0, payload_size=512 00:12:36.055 [2024-12-02 07:40:01.635462] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635466] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635471] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:36.055 [2024-12-02 07:40:01.635477] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:36.055 [2024-12-02 07:40:01.635480] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:36.055 [2024-12-02 07:40:01.635484] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x222bd30): datao=0, datal=4096, cccid=7 00:12:36.055 [2024-12-02 07:40:01.635488] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x228a8d0) on tqpair(0x222bd30): expected_datao=0, payload_size=4096 00:12:36.055 ===================================================== 00:12:36.055 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:36.055 ===================================================== 00:12:36.055 Controller Capabilities/Features 00:12:36.055 ================================ 00:12:36.055 Vendor ID: 8086 00:12:36.055 Subsystem Vendor ID: 8086 00:12:36.055 Serial Number: SPDK00000000000001 00:12:36.055 Model Number: SPDK bdev Controller 00:12:36.055 Firmware Version: 24.01.1 00:12:36.055 Recommended Arb Burst: 6 00:12:36.055 IEEE OUI Identifier: e4 d2 5c 00:12:36.055 Multi-path I/O 00:12:36.055 May have multiple subsystem ports: Yes 00:12:36.055 May have multiple controllers: Yes 00:12:36.055 Associated with SR-IOV VF: No 00:12:36.055 Max Data Transfer Size: 131072 00:12:36.055 Max Number of Namespaces: 32 00:12:36.055 Max Number of I/O Queues: 127 00:12:36.055 NVMe Specification Version (VS): 1.3 00:12:36.055 NVMe Specification Version (Identify): 1.3 00:12:36.055 Maximum Queue Entries: 128 00:12:36.055 Contiguous Queues Required: Yes 00:12:36.055 Arbitration Mechanisms Supported 00:12:36.055 Weighted Round Robin: Not Supported 00:12:36.055 Vendor Specific: Not Supported 00:12:36.055 Reset Timeout: 15000 ms 00:12:36.055 Doorbell Stride: 4 bytes 00:12:36.055 NVM Subsystem Reset: Not Supported 00:12:36.055 Command Sets Supported 00:12:36.055 NVM Command Set: Supported 00:12:36.055 Boot Partition: Not Supported 00:12:36.055 Memory Page Size Minimum: 4096 bytes 00:12:36.055 Memory Page Size Maximum: 4096 bytes 00:12:36.055 Persistent Memory Region: Not Supported 00:12:36.055 Optional Asynchronous Events Supported 00:12:36.055 Namespace Attribute Notices: Supported 00:12:36.055 Firmware Activation Notices: Not Supported 00:12:36.055 ANA Change Notices: Not Supported 00:12:36.055 PLE Aggregate Log Change Notices: Not Supported 00:12:36.055 LBA Status Info Alert Notices: Not Supported 00:12:36.055 EGE Aggregate 
Log Change Notices: Not Supported 00:12:36.055 Normal NVM Subsystem Shutdown event: Not Supported 00:12:36.055 Zone Descriptor Change Notices: Not Supported 00:12:36.055 Discovery Log Change Notices: Not Supported 00:12:36.055 Controller Attributes 00:12:36.055 128-bit Host Identifier: Supported 00:12:36.055 Non-Operational Permissive Mode: Not Supported 00:12:36.055 NVM Sets: Not Supported 00:12:36.055 Read Recovery Levels: Not Supported 00:12:36.055 Endurance Groups: Not Supported 00:12:36.055 Predictable Latency Mode: Not Supported 00:12:36.055 Traffic Based Keep ALive: Not Supported 00:12:36.055 Namespace Granularity: Not Supported 00:12:36.055 SQ Associations: Not Supported 00:12:36.055 UUID List: Not Supported 00:12:36.055 Multi-Domain Subsystem: Not Supported 00:12:36.055 Fixed Capacity Management: Not Supported 00:12:36.055 Variable Capacity Management: Not Supported 00:12:36.055 Delete Endurance Group: Not Supported 00:12:36.055 Delete NVM Set: Not Supported 00:12:36.055 Extended LBA Formats Supported: Not Supported 00:12:36.055 Flexible Data Placement Supported: Not Supported 00:12:36.055 00:12:36.055 Controller Memory Buffer Support 00:12:36.055 ================================ 00:12:36.055 Supported: No 00:12:36.055 00:12:36.055 Persistent Memory Region Support 00:12:36.055 ================================ 00:12:36.055 Supported: No 00:12:36.055 00:12:36.055 Admin Command Set Attributes 00:12:36.055 ============================ 00:12:36.055 Security Send/Receive: Not Supported 00:12:36.055 Format NVM: Not Supported 00:12:36.056 Firmware Activate/Download: Not Supported 00:12:36.056 Namespace Management: Not Supported 00:12:36.056 Device Self-Test: Not Supported 00:12:36.056 Directives: Not Supported 00:12:36.056 NVMe-MI: Not Supported 00:12:36.056 Virtualization Management: Not Supported 00:12:36.056 Doorbell Buffer Config: Not Supported 00:12:36.056 Get LBA Status Capability: Not Supported 00:12:36.056 Command & Feature Lockdown Capability: Not Supported 00:12:36.056 Abort Command Limit: 4 00:12:36.056 Async Event Request Limit: 4 00:12:36.056 Number of Firmware Slots: N/A 00:12:36.056 Firmware Slot 1 Read-Only: N/A 00:12:36.056 Firmware Activation Without Reset: [2024-12-02 07:40:01.635495] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635499] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.056 [2024-12-02 07:40:01.635510] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.056 [2024-12-02 07:40:01.635513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635517] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a610) on tqpair=0x222bd30 00:12:36.056 [2024-12-02 07:40:01.635534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.056 [2024-12-02 07:40:01.635541] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.056 [2024-12-02 07:40:01.635544] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635548] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a4b0) on tqpair=0x222bd30 00:12:36.056 [2024-12-02 07:40:01.635558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.056 [2024-12-02 07:40:01.635564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:12:36.056 [2024-12-02 07:40:01.635568] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a770) on tqpair=0x222bd30 00:12:36.056 [2024-12-02 07:40:01.635579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.056 [2024-12-02 07:40:01.635585] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.056 [2024-12-02 07:40:01.635588] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635592] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a8d0) on tqpair=0x222bd30 00:12:36.056 N/A 00:12:36.056 Multiple Update Detection Support: N/A 00:12:36.056 Firmware Update Granularity: No Information Provided 00:12:36.056 Per-Namespace SMART Log: No 00:12:36.056 Asymmetric Namespace Access Log Page: Not Supported 00:12:36.056 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:12:36.056 Command Effects Log Page: Supported 00:12:36.056 Get Log Page Extended Data: Supported 00:12:36.056 Telemetry Log Pages: Not Supported 00:12:36.056 Persistent Event Log Pages: Not Supported 00:12:36.056 Supported Log Pages Log Page: May Support 00:12:36.056 Commands Supported & Effects Log Page: Not Supported 00:12:36.056 Feature Identifiers & Effects Log Page:May Support 00:12:36.056 NVMe-MI Commands & Effects Log Page: May Support 00:12:36.056 Data Area 4 for Telemetry Log: Not Supported 00:12:36.056 Error Log Page Entries Supported: 128 00:12:36.056 Keep Alive: Supported 00:12:36.056 Keep Alive Granularity: 10000 ms 00:12:36.056 00:12:36.056 NVM Command Set Attributes 00:12:36.056 ========================== 00:12:36.056 Submission Queue Entry Size 00:12:36.056 Max: 64 00:12:36.056 Min: 64 00:12:36.056 Completion Queue Entry Size 00:12:36.056 Max: 16 00:12:36.056 Min: 16 00:12:36.056 Number of Namespaces: 32 00:12:36.056 Compare Command: Supported 00:12:36.056 Write Uncorrectable Command: Not Supported 00:12:36.056 Dataset Management Command: Supported 00:12:36.056 Write Zeroes Command: Supported 00:12:36.056 Set Features Save Field: Not Supported 00:12:36.056 Reservations: Supported 00:12:36.056 Timestamp: Not Supported 00:12:36.056 Copy: Supported 00:12:36.056 Volatile Write Cache: Present 00:12:36.056 Atomic Write Unit (Normal): 1 00:12:36.056 Atomic Write Unit (PFail): 1 00:12:36.056 Atomic Compare & Write Unit: 1 00:12:36.056 Fused Compare & Write: Supported 00:12:36.056 Scatter-Gather List 00:12:36.056 SGL Command Set: Supported 00:12:36.056 SGL Keyed: Supported 00:12:36.056 SGL Bit Bucket Descriptor: Not Supported 00:12:36.056 SGL Metadata Pointer: Not Supported 00:12:36.056 Oversized SGL: Not Supported 00:12:36.056 SGL Metadata Address: Not Supported 00:12:36.056 SGL Offset: Supported 00:12:36.056 Transport SGL Data Block: Not Supported 00:12:36.056 Replay Protected Memory Block: Not Supported 00:12:36.056 00:12:36.056 Firmware Slot Information 00:12:36.056 ========================= 00:12:36.056 Active slot: 1 00:12:36.056 Slot 1 Firmware Revision: 24.01.1 00:12:36.056 00:12:36.056 00:12:36.056 Commands Supported and Effects 00:12:36.056 ============================== 00:12:36.056 Admin Commands 00:12:36.056 -------------- 00:12:36.056 Get Log Page (02h): Supported 00:12:36.056 Identify (06h): Supported 00:12:36.056 Abort (08h): Supported 00:12:36.056 Set Features (09h): Supported 00:12:36.056 Get Features (0Ah): Supported 00:12:36.056 Asynchronous Event Request (0Ch): 
Supported 00:12:36.056 Keep Alive (18h): Supported 00:12:36.056 I/O Commands 00:12:36.056 ------------ 00:12:36.056 Flush (00h): Supported LBA-Change 00:12:36.056 Write (01h): Supported LBA-Change 00:12:36.056 Read (02h): Supported 00:12:36.056 Compare (05h): Supported 00:12:36.056 Write Zeroes (08h): Supported LBA-Change 00:12:36.056 Dataset Management (09h): Supported LBA-Change 00:12:36.056 Copy (19h): Supported LBA-Change 00:12:36.056 Unknown (79h): Supported LBA-Change 00:12:36.056 Unknown (7Ah): Supported 00:12:36.056 00:12:36.056 Error Log 00:12:36.056 ========= 00:12:36.056 00:12:36.056 Arbitration 00:12:36.056 =========== 00:12:36.056 Arbitration Burst: 1 00:12:36.056 00:12:36.056 Power Management 00:12:36.056 ================ 00:12:36.056 Number of Power States: 1 00:12:36.056 Current Power State: Power State #0 00:12:36.056 Power State #0: 00:12:36.056 Max Power: 0.00 W 00:12:36.056 Non-Operational State: Operational 00:12:36.056 Entry Latency: Not Reported 00:12:36.056 Exit Latency: Not Reported 00:12:36.056 Relative Read Throughput: 0 00:12:36.056 Relative Read Latency: 0 00:12:36.056 Relative Write Throughput: 0 00:12:36.056 Relative Write Latency: 0 00:12:36.056 Idle Power: Not Reported 00:12:36.056 Active Power: Not Reported 00:12:36.056 Non-Operational Permissive Mode: Not Supported 00:12:36.056 00:12:36.056 Health Information 00:12:36.056 ================== 00:12:36.056 Critical Warnings: 00:12:36.056 Available Spare Space: OK 00:12:36.056 Temperature: OK 00:12:36.056 Device Reliability: OK 00:12:36.056 Read Only: No 00:12:36.056 Volatile Memory Backup: OK 00:12:36.056 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:36.056 Temperature Threshold: [2024-12-02 07:40:01.635710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.635721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x222bd30) 00:12:36.056 [2024-12-02 07:40:01.635729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.056 [2024-12-02 07:40:01.635755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a8d0, cid 7, qid 0 00:12:36.056 [2024-12-02 07:40:01.636398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.056 [2024-12-02 07:40:01.636414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.056 [2024-12-02 07:40:01.636418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.056 [2024-12-02 07:40:01.636422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a8d0) on tqpair=0x222bd30 00:12:36.056 [2024-12-02 07:40:01.636467] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:12:36.056 [2024-12-02 07:40:01.636481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.057 [2024-12-02 07:40:01.636488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.057 [2024-12-02 07:40:01.636494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.057 [2024-12-02 07:40:01.636500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.057 [2024-12-02 07:40:01.636509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.636513] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.636517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.636525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.636547] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.636787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.636802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.636806] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.636810] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.636819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.636824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.636827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.636835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.636856] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.637014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.637028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.637032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.637042] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:12:36.057 [2024-12-02 07:40:01.637047] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:12:36.057 [2024-12-02 07:40:01.637057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.637073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.637091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.637399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.637413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 
07:40:01.637417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.637434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.637449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.637467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.637728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.637742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.637746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.637763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.637771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.637778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.637795] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.638059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.638073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.638078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.638093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.638109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.638127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.638372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.638386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.638391] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638395] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.638406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.638422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.638440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.638692] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.638706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.638710] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.638725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638730] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.638734] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.638741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.638758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.638985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.638996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.639000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.639004] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.639016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.639020] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.639024] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.639031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.639048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.639278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.639288] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.639293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.643359] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.643393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.643398] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.643402] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x222bd30) 00:12:36.057 [2024-12-02 07:40:01.643410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:36.057 [2024-12-02 07:40:01.643432] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x228a350, cid 3, qid 0 00:12:36.057 [2024-12-02 07:40:01.643487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:36.057 [2024-12-02 07:40:01.643494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:36.057 [2024-12-02 07:40:01.643498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:36.057 [2024-12-02 07:40:01.643501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x228a350) on tqpair=0x222bd30 00:12:36.057 [2024-12-02 07:40:01.643509] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:12:36.057 0 Kelvin (-273 Celsius) 00:12:36.057 Available Spare: 0% 00:12:36.057 Available Spare Threshold: 0% 00:12:36.057 Life Percentage Used: 0% 00:12:36.057 Data Units Read: 0 00:12:36.057 Data Units Written: 0 00:12:36.057 Host Read Commands: 0 00:12:36.057 Host Write Commands: 0 00:12:36.057 Controller Busy Time: 0 minutes 00:12:36.057 Power Cycles: 0 00:12:36.057 Power On Hours: 0 hours 00:12:36.057 Unsafe Shutdowns: 0 00:12:36.057 Unrecoverable Media Errors: 0 00:12:36.057 Lifetime Error Log Entries: 0 00:12:36.057 Warning Temperature Time: 0 minutes 00:12:36.057 Critical Temperature Time: 0 minutes 00:12:36.057 00:12:36.057 Number of Queues 00:12:36.058 ================ 00:12:36.058 Number of I/O Submission Queues: 127 00:12:36.058 Number of I/O Completion Queues: 127 00:12:36.058 00:12:36.058 Active Namespaces 00:12:36.058 ================= 00:12:36.058 Namespace ID:1 00:12:36.058 Error Recovery Timeout: Unlimited 00:12:36.058 Command Set Identifier: NVM (00h) 00:12:36.058 Deallocate: Supported 00:12:36.058 Deallocated/Unwritten Error: Not Supported 00:12:36.058 Deallocated Read Value: Unknown 00:12:36.058 Deallocate in Write Zeroes: Not Supported 00:12:36.058 Deallocated Guard Field: 0xFFFF 00:12:36.058 Flush: Supported 00:12:36.058 Reservation: Supported 00:12:36.058 Namespace Sharing Capabilities: Multiple Controllers 00:12:36.058 Size (in LBAs): 131072 (0GiB) 00:12:36.058 Capacity (in LBAs): 131072 (0GiB) 00:12:36.058 Utilization (in LBAs): 131072 (0GiB) 00:12:36.058 NGUID: ABCDEF0123456789ABCDEF0123456789 00:12:36.058 EUI64: ABCDEF0123456789 00:12:36.058 UUID: 95c18b1a-5062-4a84-9b03-7a494f744d2d 00:12:36.058 Thin Provisioning: Not Supported 00:12:36.058 Per-NS Atomic Units: Yes 00:12:36.058 Atomic Boundary Size (Normal): 0 00:12:36.058 Atomic Boundary Size (PFail): 0 00:12:36.058 Atomic Boundary Offset: 0 00:12:36.058 Maximum Single Source Range Length: 65535 00:12:36.058 Maximum Copy Length: 65535 00:12:36.058 Maximum Source Range Count: 1 00:12:36.058 NGUID/EUI64 Never Reused: No 00:12:36.058 Namespace Write Protected: No 00:12:36.058 Number of LBA Formats: 1 00:12:36.058 Current LBA Format: LBA Format #00 00:12:36.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:36.058 00:12:36.058 07:40:01 -- host/identify.sh@51 -- # sync 00:12:36.317 07:40:01 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.317 
07:40:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.317 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:12:36.317 07:40:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.317 07:40:01 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:12:36.317 07:40:01 -- host/identify.sh@56 -- # nvmftestfini 00:12:36.317 07:40:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:36.317 07:40:01 -- nvmf/common.sh@116 -- # sync 00:12:36.317 07:40:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:36.317 07:40:01 -- nvmf/common.sh@119 -- # set +e 00:12:36.317 07:40:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:36.317 07:40:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:36.317 rmmod nvme_tcp 00:12:36.317 rmmod nvme_fabrics 00:12:36.317 rmmod nvme_keyring 00:12:36.317 07:40:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:36.317 07:40:01 -- nvmf/common.sh@123 -- # set -e 00:12:36.317 07:40:01 -- nvmf/common.sh@124 -- # return 0 00:12:36.318 07:40:01 -- nvmf/common.sh@477 -- # '[' -n 68303 ']' 00:12:36.318 07:40:01 -- nvmf/common.sh@478 -- # killprocess 68303 00:12:36.318 07:40:01 -- common/autotest_common.sh@936 -- # '[' -z 68303 ']' 00:12:36.318 07:40:01 -- common/autotest_common.sh@940 -- # kill -0 68303 00:12:36.318 07:40:01 -- common/autotest_common.sh@941 -- # uname 00:12:36.318 07:40:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.318 07:40:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68303 00:12:36.318 killing process with pid 68303 00:12:36.318 07:40:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:36.318 07:40:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:36.318 07:40:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68303' 00:12:36.318 07:40:01 -- common/autotest_common.sh@955 -- # kill 68303 00:12:36.318 [2024-12-02 07:40:01.804134] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:36.318 07:40:01 -- common/autotest_common.sh@960 -- # wait 68303 00:12:36.577 07:40:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:36.577 07:40:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:36.577 07:40:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:36.577 07:40:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.577 07:40:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:36.577 07:40:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.577 07:40:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.577 07:40:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.577 07:40:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:36.577 00:12:36.577 real 0m2.485s 00:12:36.577 user 0m6.809s 00:12:36.577 sys 0m0.582s 00:12:36.577 07:40:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:36.578 07:40:02 -- common/autotest_common.sh@10 -- # set +x 00:12:36.578 ************************************ 00:12:36.578 END TEST nvmf_identify 00:12:36.578 ************************************ 00:12:36.578 07:40:02 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:12:36.578 07:40:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:36.578 07:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 
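The nvmf_identify teardown traced above reduces to a few concrete steps. A minimal sketch assembled from the xtrace lines of this run (the subsystem NQN, the nvmf_tgt pid 68303, and the rpc.py path are specific to this run; rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 68303                     # killprocess(): stop the nvmf_tgt started for this test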
00:12:36.578 07:40:02 -- common/autotest_common.sh@10 -- # set +x 00:12:36.578 ************************************ 00:12:36.578 START TEST nvmf_perf 00:12:36.578 ************************************ 00:12:36.578 07:40:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:12:36.578 * Looking for test storage... 00:12:36.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:36.578 07:40:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:36.578 07:40:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:36.578 07:40:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:36.578 07:40:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:36.578 07:40:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:36.578 07:40:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:36.578 07:40:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:36.578 07:40:02 -- scripts/common.sh@335 -- # IFS=.-: 00:12:36.578 07:40:02 -- scripts/common.sh@335 -- # read -ra ver1 00:12:36.578 07:40:02 -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.578 07:40:02 -- scripts/common.sh@336 -- # read -ra ver2 00:12:36.578 07:40:02 -- scripts/common.sh@337 -- # local 'op=<' 00:12:36.578 07:40:02 -- scripts/common.sh@339 -- # ver1_l=2 00:12:36.578 07:40:02 -- scripts/common.sh@340 -- # ver2_l=1 00:12:36.578 07:40:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:36.578 07:40:02 -- scripts/common.sh@343 -- # case "$op" in 00:12:36.578 07:40:02 -- scripts/common.sh@344 -- # : 1 00:12:36.578 07:40:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:36.578 07:40:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.578 07:40:02 -- scripts/common.sh@364 -- # decimal 1 00:12:36.578 07:40:02 -- scripts/common.sh@352 -- # local d=1 00:12:36.578 07:40:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.837 07:40:02 -- scripts/common.sh@354 -- # echo 1 00:12:36.837 07:40:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:36.837 07:40:02 -- scripts/common.sh@365 -- # decimal 2 00:12:36.837 07:40:02 -- scripts/common.sh@352 -- # local d=2 00:12:36.837 07:40:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.837 07:40:02 -- scripts/common.sh@354 -- # echo 2 00:12:36.837 07:40:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:36.837 07:40:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:36.837 07:40:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:36.838 07:40:02 -- scripts/common.sh@367 -- # return 0 00:12:36.838 07:40:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.838 07:40:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:36.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.838 --rc genhtml_branch_coverage=1 00:12:36.838 --rc genhtml_function_coverage=1 00:12:36.838 --rc genhtml_legend=1 00:12:36.838 --rc geninfo_all_blocks=1 00:12:36.838 --rc geninfo_unexecuted_blocks=1 00:12:36.838 00:12:36.838 ' 00:12:36.838 07:40:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:36.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.838 --rc genhtml_branch_coverage=1 00:12:36.838 --rc genhtml_function_coverage=1 00:12:36.838 --rc genhtml_legend=1 00:12:36.838 --rc geninfo_all_blocks=1 00:12:36.838 --rc geninfo_unexecuted_blocks=1 00:12:36.838 00:12:36.838 ' 00:12:36.838 07:40:02 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:36.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.838 --rc genhtml_branch_coverage=1 00:12:36.838 --rc genhtml_function_coverage=1 00:12:36.838 --rc genhtml_legend=1 00:12:36.838 --rc geninfo_all_blocks=1 00:12:36.838 --rc geninfo_unexecuted_blocks=1 00:12:36.838 00:12:36.838 ' 00:12:36.838 07:40:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:36.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.838 --rc genhtml_branch_coverage=1 00:12:36.838 --rc genhtml_function_coverage=1 00:12:36.838 --rc genhtml_legend=1 00:12:36.838 --rc geninfo_all_blocks=1 00:12:36.838 --rc geninfo_unexecuted_blocks=1 00:12:36.838 00:12:36.838 ' 00:12:36.838 07:40:02 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.838 07:40:02 -- nvmf/common.sh@7 -- # uname -s 00:12:36.838 07:40:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.838 07:40:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.838 07:40:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.838 07:40:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.838 07:40:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.838 07:40:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.838 07:40:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.838 07:40:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.838 07:40:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.838 07:40:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.838 07:40:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:12:36.838 07:40:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:12:36.838 07:40:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.838 07:40:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.838 07:40:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.838 07:40:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.838 07:40:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.838 07:40:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.838 07:40:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.838 07:40:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.838 07:40:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.838 07:40:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.838 07:40:02 -- paths/export.sh@5 -- # export PATH 00:12:36.838 07:40:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.838 07:40:02 -- nvmf/common.sh@46 -- # : 0 00:12:36.838 07:40:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.838 07:40:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.838 07:40:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.838 07:40:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.838 07:40:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.838 07:40:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:36.838 07:40:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.838 07:40:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.838 07:40:02 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:36.838 07:40:02 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:36.838 07:40:02 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.838 07:40:02 -- host/perf.sh@17 -- # nvmftestinit 00:12:36.838 07:40:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.838 07:40:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.838 07:40:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.838 07:40:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.838 07:40:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.838 07:40:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.838 07:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.838 07:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.838 07:40:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:36.838 07:40:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:36.838 07:40:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:36.838 07:40:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 
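The NVME_HOSTNQN, NVME_HOSTID and NVME_CONNECT values exported above are intended for a kernel-initiator connect; this perf run drives I/O with the SPDK initiator instead, so the following is only a hedged illustration of how those variables would be combined (the HOSTID derivation is an assumption inferred from the traced values):

    HOSTNQN=$(nvme gen-hostnqn)        # as in nvmf/common.sh@17 above
    HOSTID=${HOSTNQN##*:}              # assumption: UUID suffix of the NQN, matching the traced NVME_HOSTID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"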
00:12:36.838 07:40:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:36.838 07:40:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:36.838 07:40:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.838 07:40:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.838 07:40:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.838 07:40:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:36.838 07:40:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.838 07:40:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.838 07:40:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.838 07:40:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.838 07:40:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.838 07:40:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.838 07:40:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.838 07:40:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.838 07:40:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:36.838 07:40:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:36.838 Cannot find device "nvmf_tgt_br" 00:12:36.838 07:40:02 -- nvmf/common.sh@154 -- # true 00:12:36.838 07:40:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.838 Cannot find device "nvmf_tgt_br2" 00:12:36.838 07:40:02 -- nvmf/common.sh@155 -- # true 00:12:36.838 07:40:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:36.838 07:40:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:36.838 Cannot find device "nvmf_tgt_br" 00:12:36.838 07:40:02 -- nvmf/common.sh@157 -- # true 00:12:36.838 07:40:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:36.838 Cannot find device "nvmf_tgt_br2" 00:12:36.838 07:40:02 -- nvmf/common.sh@158 -- # true 00:12:36.838 07:40:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:36.838 07:40:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:36.838 07:40:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.838 07:40:02 -- nvmf/common.sh@161 -- # true 00:12:36.838 07:40:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.838 07:40:02 -- nvmf/common.sh@162 -- # true 00:12:36.838 07:40:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.838 07:40:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.838 07:40:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.838 07:40:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.838 07:40:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.838 07:40:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.838 07:40:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.838 07:40:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:37.098 07:40:02 -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:37.098 07:40:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:37.098 07:40:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:37.098 07:40:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:37.098 07:40:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:37.098 07:40:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.098 07:40:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.098 07:40:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.098 07:40:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:37.098 07:40:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:37.098 07:40:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.098 07:40:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.098 07:40:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.098 07:40:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.098 07:40:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.098 07:40:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:37.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:12:37.098 00:12:37.098 --- 10.0.0.2 ping statistics --- 00:12:37.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.098 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:37.098 07:40:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:37.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:37.098 00:12:37.098 --- 10.0.0.3 ping statistics --- 00:12:37.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.098 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:37.098 07:40:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:37.098 00:12:37.098 --- 10.0.0.1 ping statistics --- 00:12:37.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.098 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:37.098 07:40:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.098 07:40:02 -- nvmf/common.sh@421 -- # return 0 00:12:37.099 07:40:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.099 07:40:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.099 07:40:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.099 07:40:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.099 07:40:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.099 07:40:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.099 07:40:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.099 07:40:02 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:12:37.099 07:40:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:37.099 07:40:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.099 07:40:02 -- common/autotest_common.sh@10 -- # set +x 00:12:37.099 07:40:02 -- nvmf/common.sh@469 -- # nvmfpid=68518 00:12:37.099 07:40:02 -- nvmf/common.sh@470 -- # waitforlisten 68518 00:12:37.099 07:40:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.099 07:40:02 -- common/autotest_common.sh@829 -- # '[' -z 68518 ']' 00:12:37.099 07:40:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.099 07:40:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.099 07:40:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.099 07:40:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.099 07:40:02 -- common/autotest_common.sh@10 -- # set +x 00:12:37.099 [2024-12-02 07:40:02.640056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:37.099 [2024-12-02 07:40:02.640151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.358 [2024-12-02 07:40:02.772847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.358 [2024-12-02 07:40:02.821103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.358 [2024-12-02 07:40:02.821245] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.358 [2024-12-02 07:40:02.821257] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.358 [2024-12-02 07:40:02.821265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
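(For reference, the nvmf_veth_init sequence that just ran boils down to the topology below. This is a condensed, hand-runnable sketch of the commands visible in the trace; the initial nomaster/down/delete cleanup of any leftover interfaces and the surrounding shell plumbing are skipped.)

# target network namespace plus three veth pairs: one toward the initiator, two toward the target
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator side keeps 10.0.0.1; the namespaced target interfaces get 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and allow NVMe/TCP (port 4420) in from the initiator interface
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the three pings that follow in the log simply verify reachability in both directions across the bridge
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1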
00:12:37.358 [2024-12-02 07:40:02.821484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.358 [2024-12-02 07:40:02.821631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.358 [2024-12-02 07:40:02.821637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.358 [2024-12-02 07:40:02.821566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.297 07:40:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.297 07:40:03 -- common/autotest_common.sh@862 -- # return 0 00:12:38.297 07:40:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:38.297 07:40:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.297 07:40:03 -- common/autotest_common.sh@10 -- # set +x 00:12:38.297 07:40:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.297 07:40:03 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:38.297 07:40:03 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:12:38.558 07:40:04 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:12:38.558 07:40:04 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:12:38.816 07:40:04 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:12:38.816 07:40:04 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:39.076 07:40:04 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:12:39.076 07:40:04 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:12:39.076 07:40:04 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:12:39.076 07:40:04 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:12:39.076 07:40:04 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:39.335 [2024-12-02 07:40:04.717877] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.335 07:40:04 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:39.335 07:40:04 -- host/perf.sh@45 -- # for bdev in $bdevs 00:12:39.335 07:40:04 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:39.905 07:40:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:12:39.905 07:40:05 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:12:39.905 07:40:05 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.163 [2024-12-02 07:40:05.646979] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.163 07:40:05 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.422 07:40:05 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:12:40.422 07:40:05 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:40.422 07:40:05 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:12:40.422 07:40:05 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:41.357 Initializing NVMe 
Controllers 00:12:41.357 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:41.357 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:41.357 Initialization complete. Launching workers. 00:12:41.357 ======================================================== 00:12:41.357 Latency(us) 00:12:41.357 Device Information : IOPS MiB/s Average min max 00:12:41.357 PCIE (0000:00:06.0) NSID 1 from core 0: 22073.83 86.23 1449.27 360.20 9786.94 00:12:41.357 ======================================================== 00:12:41.357 Total : 22073.83 86.23 1449.27 360.20 9786.94 00:12:41.357 00:12:41.357 07:40:06 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:42.734 Initializing NVMe Controllers 00:12:42.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:42.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:42.734 Initialization complete. Launching workers. 00:12:42.734 ======================================================== 00:12:42.734 Latency(us) 00:12:42.734 Device Information : IOPS MiB/s Average min max 00:12:42.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4015.92 15.69 248.72 94.79 7182.86 00:12:42.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8103.95 6268.10 11956.43 00:12:42.734 ======================================================== 00:12:42.734 Total : 4139.92 16.17 484.00 94.79 11956.43 00:12:42.734 00:12:42.735 07:40:08 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:44.114 Initializing NVMe Controllers 00:12:44.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:44.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:44.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:44.114 Initialization complete. Launching workers. 00:12:44.114 ======================================================== 00:12:44.114 Latency(us) 00:12:44.114 Device Information : IOPS MiB/s Average min max 00:12:44.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9556.30 37.33 3361.94 423.28 8115.64 00:12:44.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3999.71 15.62 8040.76 6403.10 15468.99 00:12:44.114 ======================================================== 00:12:44.114 Total : 13556.01 52.95 4742.43 423.28 15468.99 00:12:44.114 00:12:44.114 07:40:09 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:12:44.114 07:40:09 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:46.719 Initializing NVMe Controllers 00:12:46.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:46.719 Controller IO queue size 128, less than required. 00:12:46.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.719 Controller IO queue size 128, less than required. 
00:12:46.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:46.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:46.719 Initialization complete. Launching workers. 00:12:46.719 ======================================================== 00:12:46.719 Latency(us) 00:12:46.719 Device Information : IOPS MiB/s Average min max 00:12:46.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2130.92 532.73 60475.64 29763.44 104615.60 00:12:46.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 706.97 176.74 188258.10 49831.08 300953.74 00:12:46.719 ======================================================== 00:12:46.719 Total : 2837.89 709.47 92308.69 29763.44 300953.74 00:12:46.719 00:12:46.719 07:40:12 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:12:46.978 No valid NVMe controllers or AIO or URING devices found 00:12:46.978 Initializing NVMe Controllers 00:12:46.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:46.978 Controller IO queue size 128, less than required. 00:12:46.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.978 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:12:46.978 Controller IO queue size 128, less than required. 00:12:46.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.978 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:12:46.978 WARNING: Some requested NVMe devices were skipped 00:12:46.978 07:40:12 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:12:49.512 Initializing NVMe Controllers 00:12:49.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.512 Controller IO queue size 128, less than required. 00:12:49.512 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:49.512 Controller IO queue size 128, less than required. 00:12:49.512 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:49.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:49.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:49.512 Initialization complete. Launching workers. 
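(The perf sweeps above all target the same single-subsystem export. A condensed sketch of the RPC sequence from the trace, with the full repo paths shortened to rpc.py and spdk_nvme_perf:)

# on the target (inside the namespace): one TCP transport, one subsystem, two namespaces, one listener
rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:00:06.0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# on the initiator: only queue depth (-q), IO size (-o) and runtime (-t) change between the runs above
spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'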
00:12:49.512 00:12:49.512 ==================== 00:12:49.512 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:12:49.512 TCP transport: 00:12:49.512 polls: 9169 00:12:49.512 idle_polls: 0 00:12:49.512 sock_completions: 9169 00:12:49.512 nvme_completions: 6875 00:12:49.512 submitted_requests: 10497 00:12:49.512 queued_requests: 1 00:12:49.512 00:12:49.512 ==================== 00:12:49.512 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:12:49.512 TCP transport: 00:12:49.512 polls: 9888 00:12:49.512 idle_polls: 0 00:12:49.512 sock_completions: 9888 00:12:49.512 nvme_completions: 6794 00:12:49.512 submitted_requests: 10307 00:12:49.512 queued_requests: 1 00:12:49.512 ======================================================== 00:12:49.512 Latency(us) 00:12:49.512 Device Information : IOPS MiB/s Average min max 00:12:49.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1780.58 445.14 72941.96 37633.86 128425.78 00:12:49.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1760.10 440.02 72951.69 36947.04 131149.70 00:12:49.513 ======================================================== 00:12:49.513 Total : 3540.68 885.17 72946.79 36947.04 131149.70 00:12:49.513 00:12:49.513 07:40:14 -- host/perf.sh@66 -- # sync 00:12:49.513 07:40:14 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.770 07:40:15 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:12:49.770 07:40:15 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:12:49.770 07:40:15 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:12:50.027 07:40:15 -- host/perf.sh@72 -- # ls_guid=e0bd505a-d4c6-4555-9be8-7b9939f0c3c5 00:12:50.027 07:40:15 -- host/perf.sh@73 -- # get_lvs_free_mb e0bd505a-d4c6-4555-9be8-7b9939f0c3c5 00:12:50.027 07:40:15 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e0bd505a-d4c6-4555-9be8-7b9939f0c3c5 00:12:50.027 07:40:15 -- common/autotest_common.sh@1354 -- # local lvs_info 00:12:50.027 07:40:15 -- common/autotest_common.sh@1355 -- # local fc 00:12:50.027 07:40:15 -- common/autotest_common.sh@1356 -- # local cs 00:12:50.027 07:40:15 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:12:50.285 07:40:15 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:12:50.285 { 00:12:50.285 "uuid": "e0bd505a-d4c6-4555-9be8-7b9939f0c3c5", 00:12:50.285 "name": "lvs_0", 00:12:50.285 "base_bdev": "Nvme0n1", 00:12:50.285 "total_data_clusters": 1278, 00:12:50.285 "free_clusters": 1278, 00:12:50.285 "block_size": 4096, 00:12:50.285 "cluster_size": 4194304 00:12:50.285 } 00:12:50.285 ]' 00:12:50.285 07:40:15 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e0bd505a-d4c6-4555-9be8-7b9939f0c3c5") .free_clusters' 00:12:50.285 07:40:15 -- common/autotest_common.sh@1358 -- # fc=1278 00:12:50.285 07:40:15 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e0bd505a-d4c6-4555-9be8-7b9939f0c3c5") .cluster_size' 00:12:50.285 5112 00:12:50.285 07:40:15 -- common/autotest_common.sh@1359 -- # cs=4194304 00:12:50.285 07:40:15 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:12:50.285 07:40:15 -- common/autotest_common.sh@1363 -- # echo 5112 00:12:50.285 07:40:15 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:12:50.285 07:40:15 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u e0bd505a-d4c6-4555-9be8-7b9939f0c3c5 lbd_0 5112 00:12:50.543 07:40:16 -- host/perf.sh@80 -- # lb_guid=0b651e9d-9337-4925-9785-19da3e87b7cd 00:12:50.543 07:40:16 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 0b651e9d-9337-4925-9785-19da3e87b7cd lvs_n_0 00:12:50.803 07:40:16 -- host/perf.sh@83 -- # ls_nested_guid=1693a200-6fe5-4948-ad23-34a0aef97c25 00:12:50.803 07:40:16 -- host/perf.sh@84 -- # get_lvs_free_mb 1693a200-6fe5-4948-ad23-34a0aef97c25 00:12:50.803 07:40:16 -- common/autotest_common.sh@1353 -- # local lvs_uuid=1693a200-6fe5-4948-ad23-34a0aef97c25 00:12:50.803 07:40:16 -- common/autotest_common.sh@1354 -- # local lvs_info 00:12:50.803 07:40:16 -- common/autotest_common.sh@1355 -- # local fc 00:12:50.803 07:40:16 -- common/autotest_common.sh@1356 -- # local cs 00:12:50.803 07:40:16 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:12:51.062 07:40:16 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:12:51.062 { 00:12:51.062 "uuid": "e0bd505a-d4c6-4555-9be8-7b9939f0c3c5", 00:12:51.062 "name": "lvs_0", 00:12:51.062 "base_bdev": "Nvme0n1", 00:12:51.062 "total_data_clusters": 1278, 00:12:51.062 "free_clusters": 0, 00:12:51.062 "block_size": 4096, 00:12:51.062 "cluster_size": 4194304 00:12:51.062 }, 00:12:51.062 { 00:12:51.062 "uuid": "1693a200-6fe5-4948-ad23-34a0aef97c25", 00:12:51.062 "name": "lvs_n_0", 00:12:51.062 "base_bdev": "0b651e9d-9337-4925-9785-19da3e87b7cd", 00:12:51.062 "total_data_clusters": 1276, 00:12:51.062 "free_clusters": 1276, 00:12:51.062 "block_size": 4096, 00:12:51.062 "cluster_size": 4194304 00:12:51.062 } 00:12:51.062 ]' 00:12:51.062 07:40:16 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="1693a200-6fe5-4948-ad23-34a0aef97c25") .free_clusters' 00:12:51.062 07:40:16 -- common/autotest_common.sh@1358 -- # fc=1276 00:12:51.062 07:40:16 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="1693a200-6fe5-4948-ad23-34a0aef97c25") .cluster_size' 00:12:51.321 5104 00:12:51.321 07:40:16 -- common/autotest_common.sh@1359 -- # cs=4194304 00:12:51.321 07:40:16 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:12:51.321 07:40:16 -- common/autotest_common.sh@1363 -- # echo 5104 00:12:51.321 07:40:16 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:12:51.321 07:40:16 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1693a200-6fe5-4948-ad23-34a0aef97c25 lbd_nest_0 5104 00:12:51.321 07:40:16 -- host/perf.sh@88 -- # lb_nested_guid=4ed5da04-260b-4746-b2cd-205020815c5a 00:12:51.321 07:40:16 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.581 07:40:17 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:12:51.581 07:40:17 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4ed5da04-260b-4746-b2cd-205020815c5a 00:12:51.840 07:40:17 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.099 07:40:17 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:12:52.099 07:40:17 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:12:52.099 07:40:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:12:52.099 07:40:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:12:52.099 07:40:17 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:12:52.358 No valid NVMe controllers or AIO or URING devices found 00:12:52.358 Initializing NVMe Controllers 00:12:52.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.358 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:12:52.358 WARNING: Some requested NVMe devices were skipped 00:12:52.358 07:40:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:12:52.358 07:40:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:04.564 Initializing NVMe Controllers 00:13:04.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:04.564 Initialization complete. Launching workers. 00:13:04.564 ======================================================== 00:13:04.564 Latency(us) 00:13:04.564 Device Information : IOPS MiB/s Average min max 00:13:04.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 995.59 124.45 1002.11 319.53 8555.66 00:13:04.564 ======================================================== 00:13:04.564 Total : 995.59 124.45 1002.11 319.53 8555.66 00:13:04.564 00:13:04.564 07:40:28 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:04.564 07:40:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:04.564 07:40:28 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:04.564 No valid NVMe controllers or AIO or URING devices found 00:13:04.564 Initializing NVMe Controllers 00:13:04.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.564 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:04.564 WARNING: Some requested NVMe devices were skipped 00:13:04.564 07:40:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:04.564 07:40:28 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:14.542 Initializing NVMe Controllers 00:13:14.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:14.542 Initialization complete. Launching workers. 
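(The skipped 512-byte passes here and in the later queue-depth runs are expected rather than failures: the exported lvol sits on a store with a 4096-byte block size, and 5104 MiB * 1048576 = 5351931904 bytes is exactly the namespace size quoted in the warning, so a 512-byte I/O cannot cover a whole block and spdk_nvme_perf removes that namespace from the run. Only the 131072-byte pass of each queue-depth pair produces latency numbers.)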
00:13:14.542 ======================================================== 00:13:14.542 Latency(us) 00:13:14.542 Device Information : IOPS MiB/s Average min max 00:13:14.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1367.10 170.89 23432.01 7107.15 63794.53 00:13:14.542 ======================================================== 00:13:14.542 Total : 1367.10 170.89 23432.01 7107.15 63794.53 00:13:14.542 00:13:14.542 07:40:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:14.542 07:40:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:14.542 07:40:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:14.542 No valid NVMe controllers or AIO or URING devices found 00:13:14.542 Initializing NVMe Controllers 00:13:14.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.542 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:14.542 WARNING: Some requested NVMe devices were skipped 00:13:14.542 07:40:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:14.542 07:40:39 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:24.521 Initializing NVMe Controllers 00:13:24.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.521 Controller IO queue size 128, less than required. 00:13:24.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:24.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:24.521 Initialization complete. Launching workers. 
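(The layout behind this qd/io-size sweep, condensed from the RPCs earlier in the trace. $ls_guid, $lb_guid, $ls_nested_guid and $lb_nested_guid are the UUIDs the script captures along the way, and the 5112/5104 MiB sizes come straight from the reported free_clusters times the 4 MiB cluster_size:)

rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0                    # 1278 free clusters -> 5112 MiB
rpc.py bdev_lvol_create -u "$ls_guid" lbd_0 5112
rpc.py bdev_lvol_create_lvstore "$lb_guid" lvs_n_0               # nested store on top of lbd_0
rpc.py bdev_lvol_create -u "$ls_nested_guid" lbd_nest_0 5104     # 1276 free clusters -> 5104 MiB
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$lb_nested_guid"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the sweep itself: three queue depths by two IO sizes, 10 s randrw each
for qd in 1 32 128; do
  for o in 512 131072; do
    spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done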
00:13:24.521 ======================================================== 00:13:24.521 Latency(us) 00:13:24.521 Device Information : IOPS MiB/s Average min max 00:13:24.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4300.50 537.56 29784.00 11846.48 61526.32 00:13:24.521 ======================================================== 00:13:24.521 Total : 4300.50 537.56 29784.00 11846.48 61526.32 00:13:24.521 00:13:24.521 07:40:49 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.521 07:40:49 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4ed5da04-260b-4746-b2cd-205020815c5a 00:13:24.521 07:40:50 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:13:24.780 07:40:50 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0b651e9d-9337-4925-9785-19da3e87b7cd 00:13:25.040 07:40:50 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:13:25.299 07:40:50 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:25.299 07:40:50 -- host/perf.sh@114 -- # nvmftestfini 00:13:25.299 07:40:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:25.299 07:40:50 -- nvmf/common.sh@116 -- # sync 00:13:25.299 07:40:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:25.299 07:40:50 -- nvmf/common.sh@119 -- # set +e 00:13:25.299 07:40:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:25.299 07:40:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:25.299 rmmod nvme_tcp 00:13:25.299 rmmod nvme_fabrics 00:13:25.299 rmmod nvme_keyring 00:13:25.299 07:40:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:25.299 07:40:50 -- nvmf/common.sh@123 -- # set -e 00:13:25.299 07:40:50 -- nvmf/common.sh@124 -- # return 0 00:13:25.299 07:40:50 -- nvmf/common.sh@477 -- # '[' -n 68518 ']' 00:13:25.299 07:40:50 -- nvmf/common.sh@478 -- # killprocess 68518 00:13:25.299 07:40:50 -- common/autotest_common.sh@936 -- # '[' -z 68518 ']' 00:13:25.299 07:40:50 -- common/autotest_common.sh@940 -- # kill -0 68518 00:13:25.299 07:40:50 -- common/autotest_common.sh@941 -- # uname 00:13:25.299 07:40:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.299 07:40:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68518 00:13:25.299 killing process with pid 68518 00:13:25.299 07:40:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:25.299 07:40:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:25.299 07:40:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68518' 00:13:25.299 07:40:50 -- common/autotest_common.sh@955 -- # kill 68518 00:13:25.299 07:40:50 -- common/autotest_common.sh@960 -- # wait 68518 00:13:27.210 07:40:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.210 07:40:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.210 07:40:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.210 07:40:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.210 07:40:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.210 07:40:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.210 07:40:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.210 07:40:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.210 07:40:52 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:13:27.210 00:13:27.210 real 0m50.327s 00:13:27.210 user 3m7.250s 00:13:27.210 sys 0m13.496s 00:13:27.210 07:40:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:27.210 07:40:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.210 ************************************ 00:13:27.210 END TEST nvmf_perf 00:13:27.210 ************************************ 00:13:27.210 07:40:52 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:27.210 07:40:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:27.210 07:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.210 07:40:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.210 ************************************ 00:13:27.210 START TEST nvmf_fio_host 00:13:27.210 ************************************ 00:13:27.210 07:40:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:27.210 * Looking for test storage... 00:13:27.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:27.210 07:40:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:27.210 07:40:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:27.210 07:40:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:27.210 07:40:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:27.210 07:40:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:27.210 07:40:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:27.210 07:40:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:27.210 07:40:52 -- scripts/common.sh@335 -- # IFS=.-: 00:13:27.210 07:40:52 -- scripts/common.sh@335 -- # read -ra ver1 00:13:27.210 07:40:52 -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.210 07:40:52 -- scripts/common.sh@336 -- # read -ra ver2 00:13:27.210 07:40:52 -- scripts/common.sh@337 -- # local 'op=<' 00:13:27.210 07:40:52 -- scripts/common.sh@339 -- # ver1_l=2 00:13:27.210 07:40:52 -- scripts/common.sh@340 -- # ver2_l=1 00:13:27.210 07:40:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:27.210 07:40:52 -- scripts/common.sh@343 -- # case "$op" in 00:13:27.210 07:40:52 -- scripts/common.sh@344 -- # : 1 00:13:27.210 07:40:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:27.210 07:40:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.210 07:40:52 -- scripts/common.sh@364 -- # decimal 1 00:13:27.210 07:40:52 -- scripts/common.sh@352 -- # local d=1 00:13:27.210 07:40:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.210 07:40:52 -- scripts/common.sh@354 -- # echo 1 00:13:27.210 07:40:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:27.210 07:40:52 -- scripts/common.sh@365 -- # decimal 2 00:13:27.210 07:40:52 -- scripts/common.sh@352 -- # local d=2 00:13:27.210 07:40:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.210 07:40:52 -- scripts/common.sh@354 -- # echo 2 00:13:27.210 07:40:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:27.210 07:40:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:27.210 07:40:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:27.210 07:40:52 -- scripts/common.sh@367 -- # return 0 00:13:27.210 07:40:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.210 07:40:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:27.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.210 --rc genhtml_branch_coverage=1 00:13:27.210 --rc genhtml_function_coverage=1 00:13:27.210 --rc genhtml_legend=1 00:13:27.210 --rc geninfo_all_blocks=1 00:13:27.210 --rc geninfo_unexecuted_blocks=1 00:13:27.210 00:13:27.210 ' 00:13:27.210 07:40:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:27.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.210 --rc genhtml_branch_coverage=1 00:13:27.210 --rc genhtml_function_coverage=1 00:13:27.211 --rc genhtml_legend=1 00:13:27.211 --rc geninfo_all_blocks=1 00:13:27.211 --rc geninfo_unexecuted_blocks=1 00:13:27.211 00:13:27.211 ' 00:13:27.211 07:40:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:27.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.211 --rc genhtml_branch_coverage=1 00:13:27.211 --rc genhtml_function_coverage=1 00:13:27.211 --rc genhtml_legend=1 00:13:27.211 --rc geninfo_all_blocks=1 00:13:27.211 --rc geninfo_unexecuted_blocks=1 00:13:27.211 00:13:27.211 ' 00:13:27.211 07:40:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:27.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.211 --rc genhtml_branch_coverage=1 00:13:27.211 --rc genhtml_function_coverage=1 00:13:27.211 --rc genhtml_legend=1 00:13:27.211 --rc geninfo_all_blocks=1 00:13:27.211 --rc geninfo_unexecuted_blocks=1 00:13:27.211 00:13:27.211 ' 00:13:27.211 07:40:52 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.211 07:40:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.211 07:40:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.211 07:40:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.211 07:40:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- paths/export.sh@5 -- # export PATH 00:13:27.211 07:40:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.211 07:40:52 -- nvmf/common.sh@7 -- # uname -s 00:13:27.211 07:40:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.211 07:40:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.211 07:40:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.211 07:40:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.211 07:40:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.211 07:40:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.211 07:40:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.211 07:40:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.211 07:40:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.211 07:40:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.211 07:40:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:13:27.211 07:40:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:13:27.211 07:40:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.211 07:40:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.211 07:40:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.211 07:40:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.211 07:40:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.211 07:40:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.211 07:40:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.211 07:40:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- paths/export.sh@5 -- # export PATH 00:13:27.211 07:40:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.211 07:40:52 -- nvmf/common.sh@46 -- # : 0 00:13:27.211 07:40:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:27.211 07:40:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:27.211 07:40:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:27.211 07:40:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.211 07:40:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.211 07:40:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:27.211 07:40:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:27.211 07:40:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:27.211 07:40:52 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.211 07:40:52 -- host/fio.sh@14 -- # nvmftestinit 00:13:27.211 07:40:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:27.211 07:40:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.211 07:40:52 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:13:27.211 07:40:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:27.211 07:40:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:27.211 07:40:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.211 07:40:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.211 07:40:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.211 07:40:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:27.211 07:40:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:27.211 07:40:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:27.211 07:40:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:27.211 07:40:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:27.211 07:40:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:27.211 07:40:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.211 07:40:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.211 07:40:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:27.211 07:40:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:27.211 07:40:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.211 07:40:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.211 07:40:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.211 07:40:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.211 07:40:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.211 07:40:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.211 07:40:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.211 07:40:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.211 07:40:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:27.211 07:40:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:27.211 Cannot find device "nvmf_tgt_br" 00:13:27.211 07:40:52 -- nvmf/common.sh@154 -- # true 00:13:27.211 07:40:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.211 Cannot find device "nvmf_tgt_br2" 00:13:27.211 07:40:52 -- nvmf/common.sh@155 -- # true 00:13:27.211 07:40:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:27.211 07:40:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:27.211 Cannot find device "nvmf_tgt_br" 00:13:27.211 07:40:52 -- nvmf/common.sh@157 -- # true 00:13:27.211 07:40:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:27.211 Cannot find device "nvmf_tgt_br2" 00:13:27.211 07:40:52 -- nvmf/common.sh@158 -- # true 00:13:27.211 07:40:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:27.211 07:40:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:27.211 07:40:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.211 07:40:52 -- nvmf/common.sh@161 -- # true 00:13:27.211 07:40:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.211 07:40:52 -- nvmf/common.sh@162 -- # true 00:13:27.211 07:40:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.211 07:40:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.212 07:40:52 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.212 07:40:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.212 07:40:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.212 07:40:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.471 07:40:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.471 07:40:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:27.471 07:40:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:27.471 07:40:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:27.471 07:40:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:27.471 07:40:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:27.471 07:40:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:27.471 07:40:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.471 07:40:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.471 07:40:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.471 07:40:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:27.471 07:40:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:27.471 07:40:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.471 07:40:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.471 07:40:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.471 07:40:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.471 07:40:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.471 07:40:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:27.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:13:27.471 00:13:27.471 --- 10.0.0.2 ping statistics --- 00:13:27.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.471 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:27.471 07:40:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:27.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:27.471 00:13:27.471 --- 10.0.0.3 ping statistics --- 00:13:27.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.471 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:27.471 07:40:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:27.471 00:13:27.471 --- 10.0.0.1 ping statistics --- 00:13:27.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.471 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:27.471 07:40:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.472 07:40:52 -- nvmf/common.sh@421 -- # return 0 00:13:27.472 07:40:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:27.472 07:40:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.472 07:40:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:27.472 07:40:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:27.472 07:40:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.472 07:40:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:27.472 07:40:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:27.472 07:40:52 -- host/fio.sh@16 -- # [[ y != y ]] 00:13:27.472 07:40:52 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:27.472 07:40:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.472 07:40:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.472 07:40:52 -- host/fio.sh@24 -- # nvmfpid=69349 00:13:27.472 07:40:52 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:27.472 07:40:52 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.472 07:40:52 -- host/fio.sh@28 -- # waitforlisten 69349 00:13:27.472 07:40:52 -- common/autotest_common.sh@829 -- # '[' -z 69349 ']' 00:13:27.472 07:40:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.472 07:40:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.472 07:40:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.472 07:40:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.472 07:40:52 -- common/autotest_common.sh@10 -- # set +x 00:13:27.472 [2024-12-02 07:40:53.027912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:27.472 [2024-12-02 07:40:53.027999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.731 [2024-12-02 07:40:53.166611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.731 [2024-12-02 07:40:53.215437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:27.731 [2024-12-02 07:40:53.215553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.731 [2024-12-02 07:40:53.215564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.731 [2024-12-02 07:40:53.215572] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
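(The fio host test starts its target the same way the perf test did: launch nvmf_tgt inside the namespace in the background, remember the pid, and block in waitforlisten until the RPC socket answers. A condensed sketch of the commands in the trace; waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock:)

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                       # 69349 in this run
waitforlisten "$nvmfpid"         # returns once the target listens on /var/tmp/spdk.sock
rpc.py nvmf_create_transport -t tcp -o -u 8192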
00:13:27.731 [2024-12-02 07:40:53.215684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.731 [2024-12-02 07:40:53.216544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.731 [2024-12-02 07:40:53.216664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.731 [2024-12-02 07:40:53.216670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.668 07:40:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.668 07:40:53 -- common/autotest_common.sh@862 -- # return 0 00:13:28.668 07:40:53 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:28.668 [2024-12-02 07:40:54.203061] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.668 07:40:54 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:28.668 07:40:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.668 07:40:54 -- common/autotest_common.sh@10 -- # set +x 00:13:28.668 07:40:54 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:28.927 Malloc1 00:13:29.186 07:40:54 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:29.186 07:40:54 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.445 07:40:54 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.703 [2024-12-02 07:40:55.186337] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.703 07:40:55 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.961 07:40:55 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:29.961 07:40:55 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:29.961 07:40:55 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:29.961 07:40:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:29.961 07:40:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:29.961 07:40:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:29.961 07:40:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:29.961 07:40:55 -- common/autotest_common.sh@1330 -- # shift 00:13:29.961 07:40:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:29.961 07:40:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:29.961 07:40:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:29.961 07:40:55 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:29.961 07:40:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:29.961 07:40:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:29.961 07:40:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:29.961 07:40:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:30.220 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:30.220 fio-3.35 00:13:30.220 Starting 1 thread 00:13:32.755 00:13:32.755 test: (groupid=0, jobs=1): err= 0: pid=69431: Mon Dec 2 07:40:57 2024 00:13:32.755 read: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(80.2MiB/2006msec) 00:13:32.755 slat (nsec): min=1809, max=266750, avg=2200.25, stdev=2643.15 00:13:32.755 clat (usec): min=2136, max=11818, avg=6500.93, stdev=479.21 00:13:32.755 lat (usec): min=2166, max=11820, avg=6503.14, stdev=479.08 00:13:32.755 clat percentiles (usec): 00:13:32.755 | 1.00th=[ 5538], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6128], 00:13:32.755 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:13:32.755 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:13:32.755 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[ 9372], 99.95th=[10159], 00:13:32.755 | 99.99th=[11207] 00:13:32.755 bw ( KiB/s): min=40016, max=41448, per=99.97%, avg=40918.00, stdev=628.58, samples=4 00:13:32.755 iops : min=10004, max=10362, avg=10229.50, stdev=157.14, samples=4 00:13:32.755 write: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(80.2MiB/2006msec); 0 zone resets 00:13:32.755 slat (nsec): min=1889, max=208607, avg=2285.21, stdev=2048.98 00:13:32.755 clat (usec): min=2018, max=10835, avg=5943.49, stdev=433.60 00:13:32.755 lat (usec): min=2030, max=10837, avg=5945.78, stdev=433.52 00:13:32.755 clat percentiles (usec): 00:13:32.755 | 1.00th=[ 5080], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5604], 00:13:32.755 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:13:32.755 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:13:32.755 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 9110], 99.95th=[ 9765], 00:13:32.755 | 99.99th=[10683] 00:13:32.755 bw ( KiB/s): min=40584, max=41344, per=100.00%, avg=40946.00, stdev=337.29, samples=4 00:13:32.755 iops : min=10146, max=10336, avg=10236.50, stdev=84.32, samples=4 00:13:32.755 lat (msec) : 4=0.12%, 10=99.82%, 20=0.06% 00:13:32.755 cpu : usr=67.78%, sys=24.29%, ctx=9, majf=0, minf=5 00:13:32.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:13:32.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.755 issued rwts: total=20527,20532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.755 00:13:32.755 Run status group 0 (all jobs): 00:13:32.755 READ: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=80.2MiB (84.1MB), 
run=2006-2006msec 00:13:32.755 WRITE: bw=40.0MiB/s (41.9MB/s), 40.0MiB/s-40.0MiB/s (41.9MB/s-41.9MB/s), io=80.2MiB (84.1MB), run=2006-2006msec 00:13:32.755 07:40:57 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:32.755 07:40:57 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:32.755 07:40:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:32.755 07:40:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:32.755 07:40:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:32.755 07:40:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:32.755 07:40:57 -- common/autotest_common.sh@1330 -- # shift 00:13:32.755 07:40:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:32.755 07:40:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:32.755 07:40:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:32.755 07:40:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:32.755 07:40:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:32.755 07:40:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:32.755 07:40:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:32.755 07:40:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:32.755 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:13:32.755 fio-3.35 00:13:32.755 Starting 1 thread 00:13:35.292 00:13:35.292 test: (groupid=0, jobs=1): err= 0: pid=69475: Mon Dec 2 07:41:00 2024 00:13:35.292 read: IOPS=9344, BW=146MiB/s (153MB/s)(293MiB/2008msec) 00:13:35.292 slat (usec): min=2, max=158, avg= 3.55, stdev= 2.44 00:13:35.292 clat (usec): min=1479, max=16014, avg=7570.63, stdev=2402.94 00:13:35.292 lat (usec): min=1482, max=16018, avg=7574.18, stdev=2403.06 00:13:35.292 clat percentiles (usec): 00:13:35.292 | 1.00th=[ 3589], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:13:35.292 | 30.00th=[ 6063], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7767], 00:13:35.292 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[10814], 95.00th=[12387], 00:13:35.292 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15139], 99.95th=[15139], 00:13:35.292 | 99.99th=[15270] 00:13:35.292 bw ( KiB/s): min=72576, max=77472, per=49.73%, avg=74360.00, stdev=2270.93, samples=4 00:13:35.292 iops : 
min= 4536, max= 4842, avg=4647.50, stdev=141.93, samples=4 00:13:35.292 write: IOPS=5328, BW=83.3MiB/s (87.3MB/s)(152MiB/1824msec); 0 zone resets 00:13:35.292 slat (usec): min=31, max=319, avg=36.13, stdev= 8.77 00:13:35.292 clat (usec): min=3019, max=20956, avg=11002.84, stdev=1970.71 00:13:35.292 lat (usec): min=3051, max=20989, avg=11038.98, stdev=1971.98 00:13:35.292 clat percentiles (usec): 00:13:35.292 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9372], 00:13:35.292 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:13:35.292 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13566], 95.00th=[14615], 00:13:35.292 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18482], 99.95th=[19268], 00:13:35.292 | 99.99th=[20841] 00:13:35.293 bw ( KiB/s): min=74688, max=79840, per=90.37%, avg=77048.00, stdev=2339.19, samples=4 00:13:35.293 iops : min= 4668, max= 4990, avg=4815.50, stdev=146.20, samples=4 00:13:35.293 lat (msec) : 2=0.03%, 4=1.49%, 10=65.25%, 20=33.22%, 50=0.01% 00:13:35.293 cpu : usr=82.31%, sys=13.05%, ctx=5, majf=0, minf=1 00:13:35.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:35.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:35.293 issued rwts: total=18764,9719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:35.293 00:13:35.293 Run status group 0 (all jobs): 00:13:35.293 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=293MiB (307MB), run=2008-2008msec 00:13:35.293 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=152MiB (159MB), run=1824-1824msec 00:13:35.293 07:41:00 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.293 07:41:00 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:13:35.293 07:41:00 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:13:35.293 07:41:00 -- host/fio.sh@51 -- # get_nvme_bdfs 00:13:35.293 07:41:00 -- common/autotest_common.sh@1508 -- # bdfs=() 00:13:35.293 07:41:00 -- common/autotest_common.sh@1508 -- # local bdfs 00:13:35.293 07:41:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:35.293 07:41:00 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:35.293 07:41:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:13:35.293 07:41:00 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:13:35.293 07:41:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:13:35.293 07:41:00 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:13:35.552 Nvme0n1 00:13:35.552 07:41:01 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:13:35.810 07:41:01 -- host/fio.sh@53 -- # ls_guid=ece5430d-e973-4931-8996-f53fb7347e87 00:13:35.810 07:41:01 -- host/fio.sh@54 -- # get_lvs_free_mb ece5430d-e973-4931-8996-f53fb7347e87 00:13:35.810 07:41:01 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ece5430d-e973-4931-8996-f53fb7347e87 00:13:35.810 07:41:01 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:35.810 07:41:01 -- common/autotest_common.sh@1355 -- # local fc 00:13:35.810 
07:41:01 -- common/autotest_common.sh@1356 -- # local cs 00:13:35.811 07:41:01 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:36.069 07:41:01 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:36.069 { 00:13:36.069 "uuid": "ece5430d-e973-4931-8996-f53fb7347e87", 00:13:36.069 "name": "lvs_0", 00:13:36.069 "base_bdev": "Nvme0n1", 00:13:36.069 "total_data_clusters": 4, 00:13:36.069 "free_clusters": 4, 00:13:36.069 "block_size": 4096, 00:13:36.069 "cluster_size": 1073741824 00:13:36.069 } 00:13:36.069 ]' 00:13:36.069 07:41:01 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ece5430d-e973-4931-8996-f53fb7347e87") .free_clusters' 00:13:36.069 07:41:01 -- common/autotest_common.sh@1358 -- # fc=4 00:13:36.069 07:41:01 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ece5430d-e973-4931-8996-f53fb7347e87") .cluster_size' 00:13:36.069 4096 00:13:36.069 07:41:01 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:13:36.069 07:41:01 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:13:36.069 07:41:01 -- common/autotest_common.sh@1363 -- # echo 4096 00:13:36.069 07:41:01 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:13:36.329 8f238ab4-129c-4dba-a3cf-92c2a7e7efcd 00:13:36.329 07:41:01 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:13:36.588 07:41:02 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:13:36.846 07:41:02 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:36.846 07:41:02 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:36.846 07:41:02 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:36.846 07:41:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:36.846 07:41:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:36.846 07:41:02 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:36.846 07:41:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:36.846 07:41:02 -- common/autotest_common.sh@1330 -- # shift 00:13:36.846 07:41:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:36.846 07:41:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:36.846 07:41:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:36.846 07:41:02 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:36.846 07:41:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:36.846 07:41:02 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:36.846 07:41:02 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:36.846 07:41:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:36.846 07:41:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:36.846 07:41:02 
-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:36.846 07:41:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:37.106 07:41:02 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:37.106 07:41:02 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:37.106 07:41:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:37.106 07:41:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:37.106 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:37.106 fio-3.35 00:13:37.106 Starting 1 thread 00:13:39.709 00:13:39.709 test: (groupid=0, jobs=1): err= 0: pid=69584: Mon Dec 2 07:41:04 2024 00:13:39.709 read: IOPS=6605, BW=25.8MiB/s (27.1MB/s)(51.8MiB/2008msec) 00:13:39.709 slat (usec): min=2, max=312, avg= 2.80, stdev= 3.97 00:13:39.709 clat (usec): min=2962, max=18167, avg=10119.64, stdev=837.19 00:13:39.709 lat (usec): min=2972, max=18170, avg=10122.45, stdev=836.87 00:13:39.709 clat percentiles (usec): 00:13:39.709 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:13:39.709 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:13:39.709 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:13:39.709 | 99.00th=[11994], 99.50th=[12256], 99.90th=[16319], 99.95th=[17171], 00:13:39.709 | 99.99th=[18220] 00:13:39.709 bw ( KiB/s): min=25692, max=26952, per=99.84%, avg=26379.00, stdev=570.77, samples=4 00:13:39.709 iops : min= 6423, max= 6738, avg=6594.75, stdev=142.69, samples=4 00:13:39.709 write: IOPS=6612, BW=25.8MiB/s (27.1MB/s)(51.9MiB/2008msec); 0 zone resets 00:13:39.709 slat (usec): min=2, max=244, avg= 2.92, stdev= 3.03 00:13:39.709 clat (usec): min=2432, max=17420, avg=9196.35, stdev=793.24 00:13:39.709 lat (usec): min=2446, max=17422, avg=9199.27, stdev=793.14 00:13:39.709 clat percentiles (usec): 00:13:39.709 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8291], 20.00th=[ 8586], 00:13:39.709 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:13:39.709 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:13:39.709 | 99.00th=[10945], 99.50th=[11076], 99.90th=[15139], 99.95th=[16450], 00:13:39.709 | 99.99th=[17433] 00:13:39.709 bw ( KiB/s): min=26304, max=26642, per=99.89%, avg=26420.50, stdev=150.72, samples=4 00:13:39.709 iops : min= 6576, max= 6660, avg=6605.00, stdev=37.43, samples=4 00:13:39.709 lat (msec) : 4=0.06%, 10=65.93%, 20=34.01% 00:13:39.709 cpu : usr=73.54%, sys=20.58%, ctx=3, majf=0, minf=14 00:13:39.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:13:39.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:39.709 issued rwts: total=13263,13277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:39.709 00:13:39.709 Run status group 0 (all jobs): 00:13:39.709 READ: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=51.8MiB (54.3MB), run=2008-2008msec 00:13:39.709 WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=51.9MiB (54.4MB), run=2008-2008msec 00:13:39.709 07:41:04 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:39.709 07:41:05 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:13:39.967 07:41:05 -- host/fio.sh@64 -- # ls_nested_guid=9aa66517-6c86-4ff9-ad31-467712bf2e3a 00:13:39.967 07:41:05 -- host/fio.sh@65 -- # get_lvs_free_mb 9aa66517-6c86-4ff9-ad31-467712bf2e3a 00:13:39.967 07:41:05 -- common/autotest_common.sh@1353 -- # local lvs_uuid=9aa66517-6c86-4ff9-ad31-467712bf2e3a 00:13:39.967 07:41:05 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:39.967 07:41:05 -- common/autotest_common.sh@1355 -- # local fc 00:13:39.967 07:41:05 -- common/autotest_common.sh@1356 -- # local cs 00:13:39.967 07:41:05 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:40.226 07:41:05 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:40.226 { 00:13:40.226 "uuid": "ece5430d-e973-4931-8996-f53fb7347e87", 00:13:40.226 "name": "lvs_0", 00:13:40.226 "base_bdev": "Nvme0n1", 00:13:40.226 "total_data_clusters": 4, 00:13:40.226 "free_clusters": 0, 00:13:40.226 "block_size": 4096, 00:13:40.226 "cluster_size": 1073741824 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "uuid": "9aa66517-6c86-4ff9-ad31-467712bf2e3a", 00:13:40.226 "name": "lvs_n_0", 00:13:40.226 "base_bdev": "8f238ab4-129c-4dba-a3cf-92c2a7e7efcd", 00:13:40.226 "total_data_clusters": 1022, 00:13:40.226 "free_clusters": 1022, 00:13:40.226 "block_size": 4096, 00:13:40.226 "cluster_size": 4194304 00:13:40.226 } 00:13:40.226 ]' 00:13:40.226 07:41:05 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="9aa66517-6c86-4ff9-ad31-467712bf2e3a") .free_clusters' 00:13:40.226 07:41:05 -- common/autotest_common.sh@1358 -- # fc=1022 00:13:40.226 07:41:05 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="9aa66517-6c86-4ff9-ad31-467712bf2e3a") .cluster_size' 00:13:40.226 4088 00:13:40.226 07:41:05 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:40.226 07:41:05 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:13:40.226 07:41:05 -- common/autotest_common.sh@1363 -- # echo 4088 00:13:40.226 07:41:05 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:13:40.484 8c0a528b-1f14-4b32-a6be-2d53d2c931b6 00:13:40.484 07:41:05 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:13:40.744 07:41:06 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:13:41.003 07:41:06 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:41.261 07:41:06 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:41.261 07:41:06 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:41.261 07:41:06 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:41.261 07:41:06 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:41.261 07:41:06 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:13:41.261 07:41:06 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:41.261 07:41:06 -- common/autotest_common.sh@1330 -- # shift 00:13:41.261 07:41:06 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:41.261 07:41:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:41.261 07:41:06 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:41.261 07:41:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:41.261 07:41:06 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:41.261 07:41:06 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:41.261 07:41:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:41.261 07:41:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:41.261 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:41.261 fio-3.35 00:13:41.261 Starting 1 thread 00:13:43.794 00:13:43.794 test: (groupid=0, jobs=1): err= 0: pid=69658: Mon Dec 2 07:41:09 2024 00:13:43.794 read: IOPS=5826, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2010msec) 00:13:43.794 slat (nsec): min=1922, max=318338, avg=2857.05, stdev=4293.75 00:13:43.794 clat (usec): min=3146, max=18093, avg=11488.56, stdev=986.13 00:13:43.794 lat (usec): min=3183, max=18095, avg=11491.42, stdev=985.80 00:13:43.794 clat percentiles (usec): 00:13:43.794 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:13:43.794 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:13:43.794 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:13:43.794 | 99.00th=[13829], 99.50th=[14091], 99.90th=[16581], 99.95th=[17957], 00:13:43.794 | 99.99th=[17957] 00:13:43.794 bw ( KiB/s): min=22216, max=23784, per=99.99%, avg=23302.00, stdev=729.84, samples=4 00:13:43.794 iops : min= 5554, max= 5946, avg=5825.50, stdev=182.46, samples=4 00:13:43.794 write: IOPS=5813, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2010msec); 0 zone resets 00:13:43.794 slat (usec): min=2, max=258, avg= 2.99, stdev= 3.27 00:13:43.794 clat (usec): min=2503, max=19476, avg=10421.55, stdev=955.60 00:13:43.794 lat (usec): min=2516, max=19478, avg=10424.54, stdev=955.45 00:13:43.794 clat percentiles (usec): 00:13:43.794 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:13:43.794 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:13:43.794 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:13:43.794 | 99.00th=[12649], 99.50th=[13042], 99.90th=[17695], 99.95th=[18220], 00:13:43.794 | 99.99th=[19530] 00:13:43.795 bw ( KiB/s): min=23064, max=23408, per=99.92%, 
avg=23236.00, stdev=173.13, samples=4 00:13:43.795 iops : min= 5766, max= 5852, avg=5809.00, stdev=43.28, samples=4 00:13:43.795 lat (msec) : 4=0.06%, 10=18.10%, 20=81.85% 00:13:43.795 cpu : usr=72.72%, sys=21.35%, ctx=21, majf=0, minf=14 00:13:43.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:13:43.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:43.795 issued rwts: total=11711,11685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:43.795 00:13:43.795 Run status group 0 (all jobs): 00:13:43.795 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2010-2010msec 00:13:43.795 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.9MB), run=2010-2010msec 00:13:43.795 07:41:09 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:43.795 07:41:09 -- host/fio.sh@74 -- # sync 00:13:43.795 07:41:09 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:13:44.054 07:41:09 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:13:44.313 07:41:09 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:13:44.573 07:41:10 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:13:44.832 07:41:10 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:45.399 07:41:10 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:45.399 07:41:10 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:13:45.399 07:41:10 -- host/fio.sh@86 -- # nvmftestfini 00:13:45.399 07:41:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.399 07:41:10 -- nvmf/common.sh@116 -- # sync 00:13:45.399 07:41:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.399 07:41:10 -- nvmf/common.sh@119 -- # set +e 00:13:45.399 07:41:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.399 07:41:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.399 rmmod nvme_tcp 00:13:45.399 rmmod nvme_fabrics 00:13:45.399 rmmod nvme_keyring 00:13:45.400 07:41:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.400 07:41:10 -- nvmf/common.sh@123 -- # set -e 00:13:45.400 07:41:10 -- nvmf/common.sh@124 -- # return 0 00:13:45.400 07:41:10 -- nvmf/common.sh@477 -- # '[' -n 69349 ']' 00:13:45.400 07:41:10 -- nvmf/common.sh@478 -- # killprocess 69349 00:13:45.400 07:41:10 -- common/autotest_common.sh@936 -- # '[' -z 69349 ']' 00:13:45.400 07:41:10 -- common/autotest_common.sh@940 -- # kill -0 69349 00:13:45.400 07:41:10 -- common/autotest_common.sh@941 -- # uname 00:13:45.400 07:41:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.400 07:41:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69349 00:13:45.400 killing process with pid 69349 00:13:45.400 07:41:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:45.400 07:41:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:45.400 07:41:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69349' 00:13:45.400 07:41:10 -- common/autotest_common.sh@955 -- # kill 69349 00:13:45.400 
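The four fio passes above all follow the same pattern: stand up an NVMe/TCP subsystem over RPC, point the SPDK fio plugin at it with a 'trtype=... traddr=... trsvcid=...' filename, and vary only the backing bdev (plain malloc, an lvol carved from the local NVMe drive, then an lvol nested inside that lvol). A condensed sketch of the RPC and fio invocations, using the same names and addresses that appear in this log (paths relative to the spdk repository root), looks like:

  # target side: transport, backing bdev, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # lvol variants swap the namespace for lvs_0/lbd_0 (cnode2) or lvs_n_0/lbd_nest_0 (cnode3)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
  scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096

  # initiator side: fio runs the job through the SPDK NVMe plugin, which fio_plugin()
  # assembles into LD_PRELOAD after probing for sanitizer libraries with ldd
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The exact PCIe address and lvol sizes are simply the ones this particular run picked up from get_nvme_bdfs and get_lvs_free_mb; the teardown that follows undoes the chain in reverse (delete lvols, delete lvstores, detach Nvme0).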
07:41:10 -- common/autotest_common.sh@960 -- # wait 69349 00:13:45.659 07:41:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.659 07:41:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.659 07:41:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.659 07:41:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.659 07:41:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.659 07:41:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.659 07:41:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.659 07:41:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.659 07:41:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:45.659 ************************************ 00:13:45.659 END TEST nvmf_fio_host 00:13:45.659 ************************************ 00:13:45.659 00:13:45.659 real 0m18.737s 00:13:45.659 user 1m22.237s 00:13:45.659 sys 0m4.269s 00:13:45.659 07:41:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.659 07:41:11 -- common/autotest_common.sh@10 -- # set +x 00:13:45.659 07:41:11 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:45.659 07:41:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:45.659 07:41:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.659 07:41:11 -- common/autotest_common.sh@10 -- # set +x 00:13:45.659 ************************************ 00:13:45.659 START TEST nvmf_failover 00:13:45.659 ************************************ 00:13:45.659 07:41:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:45.919 * Looking for test storage... 00:13:45.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:45.919 07:41:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:45.919 07:41:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:45.919 07:41:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:45.919 07:41:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:45.919 07:41:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:45.919 07:41:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:45.919 07:41:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:45.919 07:41:11 -- scripts/common.sh@335 -- # IFS=.-: 00:13:45.919 07:41:11 -- scripts/common.sh@335 -- # read -ra ver1 00:13:45.919 07:41:11 -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.919 07:41:11 -- scripts/common.sh@336 -- # read -ra ver2 00:13:45.919 07:41:11 -- scripts/common.sh@337 -- # local 'op=<' 00:13:45.919 07:41:11 -- scripts/common.sh@339 -- # ver1_l=2 00:13:45.919 07:41:11 -- scripts/common.sh@340 -- # ver2_l=1 00:13:45.919 07:41:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:45.919 07:41:11 -- scripts/common.sh@343 -- # case "$op" in 00:13:45.919 07:41:11 -- scripts/common.sh@344 -- # : 1 00:13:45.919 07:41:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:45.919 07:41:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.919 07:41:11 -- scripts/common.sh@364 -- # decimal 1 00:13:45.919 07:41:11 -- scripts/common.sh@352 -- # local d=1 00:13:45.919 07:41:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.919 07:41:11 -- scripts/common.sh@354 -- # echo 1 00:13:45.919 07:41:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:45.919 07:41:11 -- scripts/common.sh@365 -- # decimal 2 00:13:45.919 07:41:11 -- scripts/common.sh@352 -- # local d=2 00:13:45.919 07:41:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.919 07:41:11 -- scripts/common.sh@354 -- # echo 2 00:13:45.919 07:41:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:45.919 07:41:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:45.919 07:41:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:45.919 07:41:11 -- scripts/common.sh@367 -- # return 0 00:13:45.919 07:41:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.919 07:41:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:45.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.919 --rc genhtml_branch_coverage=1 00:13:45.919 --rc genhtml_function_coverage=1 00:13:45.919 --rc genhtml_legend=1 00:13:45.919 --rc geninfo_all_blocks=1 00:13:45.919 --rc geninfo_unexecuted_blocks=1 00:13:45.919 00:13:45.919 ' 00:13:45.919 07:41:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:45.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.919 --rc genhtml_branch_coverage=1 00:13:45.919 --rc genhtml_function_coverage=1 00:13:45.919 --rc genhtml_legend=1 00:13:45.919 --rc geninfo_all_blocks=1 00:13:45.919 --rc geninfo_unexecuted_blocks=1 00:13:45.919 00:13:45.919 ' 00:13:45.919 07:41:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:45.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.919 --rc genhtml_branch_coverage=1 00:13:45.919 --rc genhtml_function_coverage=1 00:13:45.919 --rc genhtml_legend=1 00:13:45.919 --rc geninfo_all_blocks=1 00:13:45.919 --rc geninfo_unexecuted_blocks=1 00:13:45.919 00:13:45.919 ' 00:13:45.919 07:41:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:45.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.919 --rc genhtml_branch_coverage=1 00:13:45.919 --rc genhtml_function_coverage=1 00:13:45.919 --rc genhtml_legend=1 00:13:45.919 --rc geninfo_all_blocks=1 00:13:45.919 --rc geninfo_unexecuted_blocks=1 00:13:45.919 00:13:45.919 ' 00:13:45.919 07:41:11 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.919 07:41:11 -- nvmf/common.sh@7 -- # uname -s 00:13:45.919 07:41:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.919 07:41:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.919 07:41:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.919 07:41:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.919 07:41:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.919 07:41:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.919 07:41:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.919 07:41:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.919 07:41:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.919 07:41:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.920 07:41:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:13:45.920 
07:41:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:13:45.920 07:41:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.920 07:41:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.920 07:41:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.920 07:41:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.920 07:41:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.920 07:41:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.920 07:41:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.920 07:41:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.920 07:41:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.920 07:41:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.920 07:41:11 -- paths/export.sh@5 -- # export PATH 00:13:45.920 07:41:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.920 07:41:11 -- nvmf/common.sh@46 -- # : 0 00:13:45.920 07:41:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:45.920 07:41:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:45.920 07:41:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:45.920 07:41:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.920 07:41:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.920 07:41:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
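The NVME_HOSTNQN/NVME_HOSTID pair generated just above with nvme gen-hostnqn, together with the three port variables (4420/4421/4422), are the values host-side tests combine whenever they connect a kernel initiator. This run drives I/O through bdevperf instead, so the following is only an illustrative sketch of the equivalent kernel-side connect using the values from this log, not something the log actually executes:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a \
      --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a

The subsystem NQN is the one failover.sh creates a little further down; the host NQN and ID are whatever gen-hostnqn produced for this run.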
00:13:45.920 07:41:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:45.920 07:41:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:45.920 07:41:11 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.920 07:41:11 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.920 07:41:11 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.920 07:41:11 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:45.920 07:41:11 -- host/failover.sh@18 -- # nvmftestinit 00:13:45.920 07:41:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:45.920 07:41:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.920 07:41:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:45.920 07:41:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:45.920 07:41:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:45.920 07:41:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.920 07:41:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.920 07:41:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.920 07:41:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:45.920 07:41:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:45.920 07:41:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:45.920 07:41:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:45.920 07:41:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:45.920 07:41:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:45.920 07:41:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.920 07:41:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.920 07:41:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.920 07:41:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:45.920 07:41:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.920 07:41:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.920 07:41:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.920 07:41:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.920 07:41:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.920 07:41:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.920 07:41:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.920 07:41:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.920 07:41:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:45.920 07:41:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:45.920 Cannot find device "nvmf_tgt_br" 00:13:45.920 07:41:11 -- nvmf/common.sh@154 -- # true 00:13:45.920 07:41:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.920 Cannot find device "nvmf_tgt_br2" 00:13:45.920 07:41:11 -- nvmf/common.sh@155 -- # true 00:13:45.920 07:41:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:45.920 07:41:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:45.920 Cannot find device "nvmf_tgt_br" 00:13:45.920 07:41:11 -- nvmf/common.sh@157 -- # true 00:13:45.920 07:41:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:45.920 Cannot find device "nvmf_tgt_br2" 00:13:45.920 07:41:11 -- nvmf/common.sh@158 -- # true 00:13:45.920 07:41:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:45.920 07:41:11 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:13:46.179 07:41:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.179 07:41:11 -- nvmf/common.sh@161 -- # true 00:13:46.179 07:41:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.179 07:41:11 -- nvmf/common.sh@162 -- # true 00:13:46.179 07:41:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.179 07:41:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.179 07:41:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.179 07:41:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.180 07:41:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.180 07:41:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.180 07:41:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.180 07:41:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.180 07:41:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.180 07:41:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:46.180 07:41:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:46.180 07:41:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:46.180 07:41:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:46.180 07:41:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.180 07:41:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.180 07:41:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.180 07:41:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:46.180 07:41:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:46.180 07:41:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.180 07:41:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.180 07:41:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.180 07:41:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.180 07:41:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.180 07:41:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:46.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:13:46.180 00:13:46.180 --- 10.0.0.2 ping statistics --- 00:13:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.180 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:46.180 07:41:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:46.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:46.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:46.180 00:13:46.180 --- 10.0.0.3 ping statistics --- 00:13:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.180 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:46.180 07:41:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:46.180 00:13:46.180 --- 10.0.0.1 ping statistics --- 00:13:46.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.180 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:46.180 07:41:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.180 07:41:11 -- nvmf/common.sh@421 -- # return 0 00:13:46.180 07:41:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:46.180 07:41:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.180 07:41:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:46.180 07:41:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:46.180 07:41:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.180 07:41:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:46.180 07:41:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:46.180 07:41:11 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:13:46.180 07:41:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:46.180 07:41:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.180 07:41:11 -- common/autotest_common.sh@10 -- # set +x 00:13:46.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.180 07:41:11 -- nvmf/common.sh@469 -- # nvmfpid=69899 00:13:46.180 07:41:11 -- nvmf/common.sh@470 -- # waitforlisten 69899 00:13:46.180 07:41:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:46.180 07:41:11 -- common/autotest_common.sh@829 -- # '[' -z 69899 ']' 00:13:46.180 07:41:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.180 07:41:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.180 07:41:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.180 07:41:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.180 07:41:11 -- common/autotest_common.sh@10 -- # set +x 00:13:46.180 [2024-12-02 07:41:11.790001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:46.180 [2024-12-02 07:41:11.790245] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.439 [2024-12-02 07:41:11.927593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.439 [2024-12-02 07:41:11.976338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.439 [2024-12-02 07:41:11.976730] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.439 [2024-12-02 07:41:11.976784] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
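nvmf_veth_init has just built the test topology those pings verify: the target runs inside the nvmf_tgt_ns_spdk network namespace and owns 10.0.0.2 and 10.0.0.3, the initiator side stays in the root namespace on 10.0.0.1, and the two sides meet on the nvmf_br bridge. Stripped of the cleanup and error handling, the commands shown in the log amount to roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

(plus the matching 'ip link set ... up' calls on every interface). nvmf_tgt is then launched under 'ip netns exec nvmf_tgt_ns_spdk', which is why its TCP listeners on 10.0.0.2 are only reachable from the initiator across this bridge.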
00:13:46.439 [2024-12-02 07:41:11.976910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.439 [2024-12-02 07:41:11.977401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.439 [2024-12-02 07:41:11.977492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.439 [2024-12-02 07:41:11.977496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.376 07:41:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.376 07:41:12 -- common/autotest_common.sh@862 -- # return 0 00:13:47.376 07:41:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:47.376 07:41:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.376 07:41:12 -- common/autotest_common.sh@10 -- # set +x 00:13:47.376 07:41:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.376 07:41:12 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.635 [2024-12-02 07:41:13.022184] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.635 07:41:13 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:47.635 Malloc0 00:13:47.895 07:41:13 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.895 07:41:13 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.154 07:41:13 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.413 [2024-12-02 07:41:13.925848] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.413 07:41:13 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:48.673 [2024-12-02 07:41:14.137998] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:48.673 07:41:14 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:13:48.932 [2024-12-02 07:41:14.342198] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:13:48.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:48.932 07:41:14 -- host/failover.sh@31 -- # bdevperf_pid=69958 00:13:48.933 07:41:14 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:13:48.933 07:41:14 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.933 07:41:14 -- host/failover.sh@34 -- # waitforlisten 69958 /var/tmp/bdevperf.sock 00:13:48.933 07:41:14 -- common/autotest_common.sh@829 -- # '[' -z 69958 ']' 00:13:48.933 07:41:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.933 07:41:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.933 07:41:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.933 07:41:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.933 07:41:14 -- common/autotest_common.sh@10 -- # set +x 00:13:49.870 07:41:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.870 07:41:15 -- common/autotest_common.sh@862 -- # return 0 00:13:49.870 07:41:15 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:50.127 NVMe0n1 00:13:50.127 07:41:15 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:50.384 00:13:50.384 07:41:15 -- host/failover.sh@39 -- # run_test_pid=69982 00:13:50.384 07:41:15 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:50.384 07:41:15 -- host/failover.sh@41 -- # sleep 1 00:13:51.763 07:41:16 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.763 [2024-12-02 07:41:17.238691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.763 [2024-12-02 07:41:17.239129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.763 [2024-12-02 07:41:17.239161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.763 [2024-12-02 07:41:17.239170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.763 [2024-12-02 07:41:17.239179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.764 [2024-12-02 07:41:17.239187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.764 [2024-12-02 07:41:17.239194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.764 [2024-12-02 07:41:17.239202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set 00:13:51.764 [2024-12-02 07:41:17.239210] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239256] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239279] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 [2024-12-02 07:41:17.239391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8fd00 is same with the state(5) to be set
00:13:51.764 07:41:17 -- host/failover.sh@45 -- # sleep 3
00:13:55.056 07:41:20 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:13:55.056
00:13:55.056 07:41:20 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:13:55.316 [2024-12-02 07:41:20.827035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 [2024-12-02 07:41:20.827398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa903c0 is same with the state(5) to be set
00:13:55.316 07:41:20 -- host/failover.sh@50 -- # sleep 3
00:13:58.604 07:41:23 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:58.604 [2024-12-02 07:41:24.082814] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:58.604 07:41:24 -- host/failover.sh@55 -- # sleep 1
00:13:59.542 07:41:25 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:13:59.799 [2024-12-02 07:41:25.360947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.360994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 [2024-12-02 07:41:25.361098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8e9f0 is same with the state(5) to be set
00:13:59.799 07:41:25 -- host/failover.sh@59 -- # wait 69982
00:14:06.372 0
00:14:06.372 07:41:31 -- host/failover.sh@61 -- # killprocess 69958
00:14:06.372 07:41:31 -- common/autotest_common.sh@936 -- # '[' -z 69958 ']'
00:14:06.372 07:41:31 -- common/autotest_common.sh@940 -- # kill -0 69958
00:14:06.372 07:41:31 -- common/autotest_common.sh@941 -- # uname
00:14:06.372 07:41:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:06.372 07:41:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69958
00:14:06.372 07:41:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:06.372 07:41:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:06.372 killing process with pid 69958
00:14:06.373 07:41:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69958'
00:14:06.373 07:41:31 -- common/autotest_common.sh@955 -- # kill 69958
00:14:06.373 07:41:31 -- common/autotest_common.sh@960 -- # wait 69958
00:14:06.373 07:41:31 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:14:06.373 [2024-12-02 07:41:14.412279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:06.373 [2024-12-02 07:41:14.412414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ]
00:14:06.373 [2024-12-02 07:41:14.549999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:06.373 [2024-12-02 07:41:14.603391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:06.373 Running I/O for 15 seconds...
00:14:06.373 [2024-12-02 07:41:17.239470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239838] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.239974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.239988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240125] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.373 [2024-12-02 07:41:17.240281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.373 [2024-12-02 07:41:17.240308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.373 [2024-12-02 07:41:17.240322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.373 [2024-12-02 07:41:17.240334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.240535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.240567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.240693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.240722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.240750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 
[2024-12-02 07:41:17.240779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.240965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.240980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.241008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.241037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.241072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.241100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.241126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.241154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.241180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.241207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.241234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.374 [2024-12-02 07:41:17.241261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.374 [2024-12-02 07:41:17.241288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.374 [2024-12-02 07:41:17.241302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:06.375 [2024-12-02 07:41:17.241680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.241948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.241976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.241991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.242004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.242018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.242031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.242045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.242058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.242072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.242129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.375 [2024-12-02 07:41:17.242143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.242166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.242181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.375 [2024-12-02 07:41:17.242195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.375 [2024-12-02 07:41:17.242209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.242857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.242976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.242990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.243003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.243017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.243030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.243044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.243057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.243071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.376 [2024-12-02 07:41:17.243084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.243098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.243111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.376 [2024-12-02 07:41:17.243125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.376 [2024-12-02 07:41:17.243138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:17.243165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:17.243194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:17.243222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:17.243249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:17.243283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:17.243321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e0970 is same with the state(5) to be set 00:14:06.377 [2024-12-02 07:41:17.243354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:06.377 [2024-12-02 07:41:17.243365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:06.377 [2024-12-02 07:41:17.243377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3912 len:8 PRP1 0x0 PRP2 0x0 00:14:06.377 [2024-12-02 07:41:17.243390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243435] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e0970 was disconnected and freed. reset controller. 
00:14:06.377 [2024-12-02 07:41:17.243452] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:06.377 [2024-12-02 07:41:17.243502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.377 [2024-12-02 07:41:17.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.377 [2024-12-02 07:41:17.243550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.377 [2024-12-02 07:41:17.243576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.377 [2024-12-02 07:41:17.243602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:17.243615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:06.377 [2024-12-02 07:41:17.245827] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:06.377 [2024-12-02 07:41:17.245863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177d690 (9): Bad file descriptor 00:14:06.377 [2024-12-02 07:41:17.270777] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:06.377 [2024-12-02 07:41:20.827464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827885] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.377 [2024-12-02 07:41:20.827897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.377 [2024-12-02 07:41:20.827924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.377 [2024-12-02 07:41:20.827950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.377 [2024-12-02 07:41:20.827977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.377 [2024-12-02 07:41:20.827991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.378 [2024-12-02 07:41:20.828500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.378 [2024-12-02 07:41:20.828621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.378 [2024-12-02 07:41:20.828636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.828650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.828694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.828723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.828752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.828781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.828810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.828848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.828877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.828920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.828949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.828978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.828991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:06.379 [2024-12-02 07:41:20.829168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829492] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.379 [2024-12-02 07:41:20.829548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.379 [2024-12-02 07:41:20.829640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.379 [2024-12-02 07:41:20.829655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.829682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.829710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829806] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.829977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.829995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:06.380 [2024-12-02 07:41:20.830445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.380 [2024-12-02 07:41:20.830500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.380 [2024-12-02 07:41:20.830624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.380 [2024-12-02 07:41:20.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830744] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.830817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.830896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.830949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.830977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.830991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.831003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831017] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.831055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.831141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.831169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.831195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.381 [2024-12-02 07:41:20.831221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36328 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.381 [2024-12-02 07:41:20.831460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.381 [2024-12-02 07:41:20.831474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5450 is same with the state(5) to be set 00:14:06.381 [2024-12-02 07:41:20.831490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:06.381 [2024-12-02 07:41:20.831517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:06.381 [2024-12-02 07:41:20.831535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36400 len:8 PRP1 0x0 PRP2 0x0 00:14:06.381 [2024-12-02 07:41:20.831549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:20.831595] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c5450 was disconnected and freed. reset controller. 
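The run of *NOTICE* entries above is the I/O queue pair (sqid:1) being drained: when the submission queue is deleted during the failover test, every queued READ/WRITE is completed with the generic status ABORTED - SQ DELETION (00/08), and bdev_nvme_disconnected_qpair_cb then frees the qpair (0x17c5450 here) and schedules a controller reset. A minimal sketch for summarizing such a run offline, assuming only the log format visible above (this helper is illustrative only and is not part of SPDK or the test suite):

#!/usr/bin/env python3
# summarize_aborts.py (hypothetical helper): count the READ/WRITE commands that
# nvme_qpair.c prints while a qpair is torn down, and the "ABORTED - SQ DELETION"
# completions per queue id.
import re
import sys
from collections import Counter

CMD = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+)')
ABORT = re.compile(r'ABORTED - SQ DELETION \(00/08\) qid:(\d+)')

opcodes = Counter()   # READ/WRITE counts
aborts = Counter()    # abort completions per qid

for line in sys.stdin:
    # A single console line may hold several log entries, so scan each line fully.
    for op, _sqid in CMD.findall(line):
        opcodes[op] += 1
    for qid in ABORT.findall(line):
        aborts[qid] += 1

print('aborted commands by opcode:', dict(opcodes))
print('SQ-deletion completions by qid:', dict(aborts))

Piping the console text through it (python3 summarize_aborts.py < build.log) gives a quick picture of how many in-flight I/Os each qpair drop aborted, without reading the individual entries.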
00:14:06.382 [2024-12-02 07:41:20.831612] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:14:06.382 [2024-12-02 07:41:20.831664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.382 [2024-12-02 07:41:20.831685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.382 [2024-12-02 07:41:20.831714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.382 [2024-12-02 07:41:20.831727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.382 [2024-12-02 07:41:20.831756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.382 [2024-12-02 07:41:20.831769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.382 [2024-12-02 07:41:20.831783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:06.382 [2024-12-02 07:41:20.831795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.382 [2024-12-02 07:41:20.831808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:14:06.382 [2024-12-02 07:41:20.831853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177d690 (9): Bad file descriptor
00:14:06.382 [2024-12-02 07:41:20.834020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:14:06.382 [2024-12-02 07:41:20.863784] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
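Once the I/O qpair is gone, bdev_nvme starts a failover from listener 10.0.0.2:4421 to 10.0.0.2:4422, the four outstanding ASYNC EVENT REQUESTs on the admin queue (qid:0) are aborted the same way, the controller is marked failed, the flush of the stale TCP socket fails with Bad file descriptor, and the controller reset completes successfully about 32 ms after the failover started (07:41:20.831612 to 07:41:20.863784). A small sketch that measures that window from a log like this one, again purely illustrative and assuming a single failover per input:

#!/usr/bin/env python3
# failover_duration.py (hypothetical helper): time between the first
# "Start failover" notice and the first "Resetting controller successful"
# notice, using the timestamps SPDK prints in square brackets.
import re
import sys
from datetime import datetime

START = re.compile(r'\[([^\]]+)\] \S*bdev_nvme_failover_trid: \*NOTICE\*: Start failover')
DONE = re.compile(r'\[([^\]]+)\] \S*_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: Resetting controller successful')

def ts(s):
    # e.g. "2024-12-02 07:41:20.831612"
    return datetime.strptime(s, '%Y-%m-%d %H:%M:%S.%f')

text = sys.stdin.read()
start, done = START.search(text), DONE.search(text)
if start and done:
    delta = ts(done.group(1)) - ts(start.group(1))
    print(f'failover + reset took {delta.total_seconds() * 1000:.1f} ms')
else:
    print('failover markers not found')

For this section that prints a value around 32 ms; with several failovers in one log, the markers would need to be paired per occurrence instead of taking the first of each. The entries that follow, stamped about 4.5 s later and covering LBAs in the 52k-53k range, are the next such abort wave from the continuing workload.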
00:14:06.382 [2024-12-02 07:41:25.361159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.382 [2024-12-02 07:41:25.361658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.382 [2024-12-02 07:41:25.361688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.382 [2024-12-02 07:41:25.361786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361829] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.382 [2024-12-02 07:41:25.361925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.382 [2024-12-02 07:41:25.361940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.382 [2024-12-02 07:41:25.361953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.361968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.361982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.361996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52592 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.383 [2024-12-02 07:41:25.362264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.383 [2024-12-02 07:41:25.362370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.383 [2024-12-02 07:41:25.362503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:06.383 [2024-12-02 07:41:25.362531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.383 [2024-12-02 07:41:25.362559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.383 [2024-12-02 07:41:25.362623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.383 [2024-12-02 07:41:25.362793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.383 [2024-12-02 07:41:25.362933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.383 [2024-12-02 07:41:25.362947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.362967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.362983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.362997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.384 [2024-12-02 07:41:25.363721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:06.384 [2024-12-02 07:41:25.363744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.384 [2024-12-02 07:41:25.363759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.384 [2024-12-02 07:41:25.363775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.363789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.363818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.363847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.363891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.363919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.363947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.363975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.363990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.364031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364046] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.364430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.364484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.364521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.364548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.385 [2024-12-02 07:41:25.364689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.385 [2024-12-02 07:41:25.364704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.385 [2024-12-02 07:41:25.364717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.386 [2024-12-02 07:41:25.364827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:06.386 [2024-12-02 07:41:25.364861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53008 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.364974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.364987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.365015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.365043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.365072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.386 [2024-12-02 07:41:25.365100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1775cc0 is same with the state(5) to be set 00:14:06.386 [2024-12-02 07:41:25.365130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:06.386 [2024-12-02 07:41:25.365141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:06.386 [2024-12-02 07:41:25.365154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53088 len:8 PRP1 0x0 PRP2 0x0 00:14:06.386 [2024-12-02 07:41:25.365167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365213] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1775cc0 was disconnected and freed. reset controller. 
00:14:06.386 [2024-12-02 07:41:25.365231] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:06.386 [2024-12-02 07:41:25.365282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.386 [2024-12-02 07:41:25.365303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.386 [2024-12-02 07:41:25.365359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.386 [2024-12-02 07:41:25.365385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.386 [2024-12-02 07:41:25.365412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.386 [2024-12-02 07:41:25.365425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:06.386 [2024-12-02 07:41:25.365470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177d690 (9): Bad file descriptor 00:14:06.386 [2024-12-02 07:41:25.367936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:06.386 [2024-12-02 07:41:25.395325] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
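Editor's note: the wall of ABORTED - SQ DELETION notices above is the expected signature of a path switch rather than an error in itself. When bdev_nvme disconnects the qpair for the old path, every command still queued on it is completed as aborted, and the controller is then reset against the new address (here failing over from 10.0.0.2:4422 back to 10.0.0.2:4420). If this console output is saved to a file, the two sides of that trade can be counted directly; the file name below is only a placeholder.

    # build.log is a hypothetical copy of this console output.
    grep -c 'ABORTED - SQ DELETION' build.log             # commands aborted while a path was torn down
    grep -c 'Start failover from' build.log               # failovers initiated by bdev_nvme
    grep -c 'Resetting controller successful' build.log   # failovers that completed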
00:14:06.386 
00:14:06.386 Latency(us) 
00:14:06.386 [2024-12-02T07:41:32.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:06.386 [2024-12-02T07:41:32.010Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:14:06.386 Verification LBA range: start 0x0 length 0x4000 
00:14:06.386 NVMe0n1 : 15.01 14592.01 57.00 291.95 0.00 8583.50 409.60 14834.97 
00:14:06.386 [2024-12-02T07:41:32.010Z] =================================================================================================================== 
00:14:06.386 [2024-12-02T07:41:32.010Z] Total : 14592.01 57.00 291.95 0.00 8583.50 409.60 14834.97 
00:14:06.386 Received shutdown signal, test time was about 15.000000 seconds 
00:14:06.386 
00:14:06.386 Latency(us) 
[2024-12-02T07:41:32.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
[2024-12-02T07:41:32.010Z] =================================================================================================================== 
[2024-12-02T07:41:32.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:14:06.386 07:41:31 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:14:06.386 07:41:31 -- host/failover.sh@65 -- # count=3 
00:14:06.386 07:41:31 -- host/failover.sh@67 -- # (( count != 3 )) 
00:14:06.386 07:41:31 -- host/failover.sh@73 -- # bdevperf_pid=70161 
00:14:06.386 07:41:31 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:14:06.386 07:41:31 -- host/failover.sh@75 -- # waitforlisten 70161 /var/tmp/bdevperf.sock 
00:14:06.386 07:41:31 -- common/autotest_common.sh@829 -- # '[' -z 70161 ']' 
00:14:06.386 07:41:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:14:06.386 07:41:31 -- common/autotest_common.sh@834 -- # local max_retries=100 
00:14:06.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
07:41:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
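Editor's note: the @65/@67 lines above close out the first phase by requiring exactly three 'Resetting controller successful' notices in the saved output, one per forced failover; the trace then starts a second bdevperf instance for the single-failover run. A minimal sketch of that launch, assuming a shell that has sourced autotest_common.sh (for waitforlisten); the backgrounding and pid variable are illustrative, the flags are copied from the trace.

    # Start a fresh bdevperf with no configuration; -z keeps it idle until it is
    # driven over the RPC socket named by -r.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # waitforlisten (autotest_common.sh helper) blocks until the socket is up.
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock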
00:14:06.386 07:41:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.386 07:41:31 -- common/autotest_common.sh@10 -- # set +x 00:14:06.951 07:41:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.951 07:41:32 -- common/autotest_common.sh@862 -- # return 0 00:14:06.951 07:41:32 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:07.209 [2024-12-02 07:41:32.637580] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:07.209 07:41:32 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:07.467 [2024-12-02 07:41:32.905855] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:07.467 07:41:32 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:07.724 NVMe0n1 00:14:07.724 07:41:33 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:07.983 00:14:07.983 07:41:33 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:08.241 00:14:08.241 07:41:33 -- host/failover.sh@82 -- # grep -q NVMe0 00:14:08.241 07:41:33 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:08.498 07:41:34 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:08.756 07:41:34 -- host/failover.sh@87 -- # sleep 3 00:14:12.037 07:41:37 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:12.037 07:41:37 -- host/failover.sh@88 -- # grep -q NVMe0 00:14:12.037 07:41:37 -- host/failover.sh@90 -- # run_test_pid=70238 00:14:12.037 07:41:37 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:12.037 07:41:37 -- host/failover.sh@92 -- # wait 70238 00:14:13.413 0 00:14:13.413 07:41:38 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:13.413 [2024-12-02 07:41:31.399934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
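Editor's note: the rpc.py calls at the top of this block are the heart of the scenario: the target gains listeners on ports 4421 and 4422 alongside 4420, bdevperf attaches the same subsystem over all three ports as a single controller (NVMe0), and the 4420 path is then detached so bdev_nvme has to fail over while the verify job runs (the dump of try.txt that begins above continues below). A condensed recap of those calls, with an editorial brpc wrapper for the bdevperf socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }     # RPCs aimed at bdevperf, not the target
    # Two extra listeners on the target subsystem.
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # One controller with a path per port on the bdevperf side (loop is editorial shorthand).
    for port in 4420 4421 4422; do
        brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
             -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    brpc bdev_nvme_get_controllers | grep -q NVMe0        # controller present before the fault
    # Drop the first path, give bdev_nvme a moment to fail over, then run the timed job.
    brpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    brpc bdev_nvme_get_controllers | grep -q NVMe0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests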
00:14:13.413 [2024-12-02 07:41:31.400040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70161 ] 00:14:13.413 [2024-12-02 07:41:31.537630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.413 [2024-12-02 07:41:31.589842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.413 [2024-12-02 07:41:34.269060] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:13.413 [2024-12-02 07:41:34.269184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.413 [2024-12-02 07:41:34.269209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.413 [2024-12-02 07:41:34.269224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.413 [2024-12-02 07:41:34.269237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.413 [2024-12-02 07:41:34.269249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.413 [2024-12-02 07:41:34.269261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.413 [2024-12-02 07:41:34.269274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.413 [2024-12-02 07:41:34.269286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.413 [2024-12-02 07:41:34.269298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:13.413 [2024-12-02 07:41:34.269353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:13.413 [2024-12-02 07:41:34.269401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e3690 (9): Bad file descriptor 00:14:13.413 [2024-12-02 07:41:34.273056] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:13.413 Running I/O for 1 seconds... 
00:14:13.413 
00:14:13.413 Latency(us) 
[2024-12-02T07:41:39.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:13.413 [2024-12-02T07:41:39.037Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:14:13.413 Verification LBA range: start 0x0 length 0x4000 
00:14:13.413 NVMe0n1 : 1.01 14613.96 57.09 0.00 0.00 8720.54 815.48 10724.07 
00:14:13.413 [2024-12-02T07:41:39.037Z] =================================================================================================================== 
00:14:13.413 [2024-12-02T07:41:39.037Z] Total : 14613.96 57.09 0.00 0.00 8720.54 815.48 10724.07 
00:14:13.413 07:41:38 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:14:13.413 07:41:38 -- host/failover.sh@95 -- # grep -q NVMe0 
00:14:13.413 07:41:38 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:14:13.672 07:41:39 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:14:13.672 07:41:39 -- host/failover.sh@99 -- # grep -q NVMe0 
00:14:13.941 07:41:39 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:14:14.224 07:41:39 -- host/failover.sh@101 -- # sleep 3 
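Editor's note: after the one-second run, the remaining paths are removed one at a time, and before each step bdev_nvme_get_controllers is grepped to confirm the controller is still visible to bdevperf. Roughly, with the same calls as the trace above folded behind a small editorial wrapper:

    brpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_get_controllers | grep -q NVMe0
    brpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    brpc bdev_nvme_get_controllers | grep -q NVMe0
    brpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    brpc bdev_nvme_get_controllers | grep -q NVMe0        # final check before bdevperf is shut down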
00:14:17.774 rmmod nvme_fabrics 00:14:18.033 rmmod nvme_keyring 00:14:18.033 07:41:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:18.033 07:41:43 -- nvmf/common.sh@123 -- # set -e 00:14:18.033 07:41:43 -- nvmf/common.sh@124 -- # return 0 00:14:18.033 07:41:43 -- nvmf/common.sh@477 -- # '[' -n 69899 ']' 00:14:18.033 07:41:43 -- nvmf/common.sh@478 -- # killprocess 69899 00:14:18.033 07:41:43 -- common/autotest_common.sh@936 -- # '[' -z 69899 ']' 00:14:18.033 07:41:43 -- common/autotest_common.sh@940 -- # kill -0 69899 00:14:18.033 07:41:43 -- common/autotest_common.sh@941 -- # uname 00:14:18.033 07:41:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.033 07:41:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69899 00:14:18.033 07:41:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:18.033 07:41:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:18.033 07:41:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69899' 00:14:18.033 killing process with pid 69899 00:14:18.033 07:41:43 -- common/autotest_common.sh@955 -- # kill 69899 00:14:18.033 07:41:43 -- common/autotest_common.sh@960 -- # wait 69899 00:14:18.292 07:41:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:18.292 07:41:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:18.292 07:41:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:18.292 07:41:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.292 07:41:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:18.292 07:41:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.292 07:41:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.292 07:41:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.292 07:41:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:18.292 00:14:18.292 real 0m32.469s 00:14:18.292 user 2m6.243s 00:14:18.292 sys 0m5.156s 00:14:18.292 07:41:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:18.292 07:41:43 -- common/autotest_common.sh@10 -- # set +x 00:14:18.292 ************************************ 00:14:18.292 END TEST nvmf_failover 00:14:18.292 ************************************ 00:14:18.292 07:41:43 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:18.292 07:41:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:18.292 07:41:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.292 07:41:43 -- common/autotest_common.sh@10 -- # set +x 00:14:18.292 ************************************ 00:14:18.292 START TEST nvmf_discovery 00:14:18.292 ************************************ 00:14:18.292 07:41:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:18.292 * Looking for test storage... 
00:14:18.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:18.292 07:41:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:18.293 07:41:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:18.293 07:41:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:18.293 07:41:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:18.293 07:41:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:18.293 07:41:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:18.293 07:41:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:18.293 07:41:43 -- scripts/common.sh@335 -- # IFS=.-: 00:14:18.293 07:41:43 -- scripts/common.sh@335 -- # read -ra ver1 00:14:18.293 07:41:43 -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.293 07:41:43 -- scripts/common.sh@336 -- # read -ra ver2 00:14:18.293 07:41:43 -- scripts/common.sh@337 -- # local 'op=<' 00:14:18.293 07:41:43 -- scripts/common.sh@339 -- # ver1_l=2 00:14:18.293 07:41:43 -- scripts/common.sh@340 -- # ver2_l=1 00:14:18.293 07:41:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:18.293 07:41:43 -- scripts/common.sh@343 -- # case "$op" in 00:14:18.293 07:41:43 -- scripts/common.sh@344 -- # : 1 00:14:18.293 07:41:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:18.293 07:41:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:18.293 07:41:43 -- scripts/common.sh@364 -- # decimal 1 00:14:18.552 07:41:43 -- scripts/common.sh@352 -- # local d=1 00:14:18.552 07:41:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.552 07:41:43 -- scripts/common.sh@354 -- # echo 1 00:14:18.552 07:41:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:18.552 07:41:43 -- scripts/common.sh@365 -- # decimal 2 00:14:18.552 07:41:43 -- scripts/common.sh@352 -- # local d=2 00:14:18.552 07:41:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.552 07:41:43 -- scripts/common.sh@354 -- # echo 2 00:14:18.552 07:41:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:18.552 07:41:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:18.552 07:41:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:18.552 07:41:43 -- scripts/common.sh@367 -- # return 0 00:14:18.552 07:41:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.552 07:41:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:18.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.552 --rc genhtml_branch_coverage=1 00:14:18.552 --rc genhtml_function_coverage=1 00:14:18.552 --rc genhtml_legend=1 00:14:18.552 --rc geninfo_all_blocks=1 00:14:18.552 --rc geninfo_unexecuted_blocks=1 00:14:18.552 00:14:18.552 ' 00:14:18.552 07:41:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:18.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.552 --rc genhtml_branch_coverage=1 00:14:18.552 --rc genhtml_function_coverage=1 00:14:18.552 --rc genhtml_legend=1 00:14:18.552 --rc geninfo_all_blocks=1 00:14:18.552 --rc geninfo_unexecuted_blocks=1 00:14:18.552 00:14:18.552 ' 00:14:18.552 07:41:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:18.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.552 --rc genhtml_branch_coverage=1 00:14:18.552 --rc genhtml_function_coverage=1 00:14:18.552 --rc genhtml_legend=1 00:14:18.552 --rc geninfo_all_blocks=1 00:14:18.552 --rc geninfo_unexecuted_blocks=1 00:14:18.552 00:14:18.552 ' 00:14:18.552 
07:41:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:18.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.552 --rc genhtml_branch_coverage=1 00:14:18.552 --rc genhtml_function_coverage=1 00:14:18.552 --rc genhtml_legend=1 00:14:18.552 --rc geninfo_all_blocks=1 00:14:18.552 --rc geninfo_unexecuted_blocks=1 00:14:18.552 00:14:18.552 ' 00:14:18.552 07:41:43 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:18.552 07:41:43 -- nvmf/common.sh@7 -- # uname -s 00:14:18.552 07:41:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.552 07:41:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.552 07:41:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.552 07:41:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.552 07:41:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.552 07:41:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.552 07:41:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.552 07:41:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.552 07:41:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.552 07:41:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.552 07:41:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:14:18.552 07:41:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:14:18.552 07:41:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.552 07:41:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.552 07:41:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:18.552 07:41:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.552 07:41:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.552 07:41:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.552 07:41:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.552 07:41:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.552 07:41:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.552 07:41:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.552 07:41:43 -- paths/export.sh@5 -- # export PATH 00:14:18.553 07:41:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.553 07:41:43 -- nvmf/common.sh@46 -- # : 0 00:14:18.553 07:41:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:18.553 07:41:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:18.553 07:41:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:18.553 07:41:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.553 07:41:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.553 07:41:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:18.553 07:41:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:18.553 07:41:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:18.553 07:41:43 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:18.553 07:41:43 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:18.553 07:41:43 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:18.553 07:41:43 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:18.553 07:41:43 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:18.553 07:41:43 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:18.553 07:41:43 -- host/discovery.sh@25 -- # nvmftestinit 00:14:18.553 07:41:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:18.553 07:41:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.553 07:41:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:18.553 07:41:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:18.553 07:41:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:18.553 07:41:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.553 07:41:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.553 07:41:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.553 07:41:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:18.553 07:41:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:18.553 07:41:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:18.553 07:41:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:18.553 07:41:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:18.553 07:41:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:18.553 07:41:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.553 07:41:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.553 07:41:43 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:18.553 07:41:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:18.553 07:41:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.553 07:41:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.553 07:41:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.553 07:41:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.553 07:41:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.553 07:41:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.553 07:41:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.553 07:41:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.553 07:41:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:18.553 07:41:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:18.553 Cannot find device "nvmf_tgt_br" 00:14:18.553 07:41:43 -- nvmf/common.sh@154 -- # true 00:14:18.553 07:41:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.553 Cannot find device "nvmf_tgt_br2" 00:14:18.553 07:41:43 -- nvmf/common.sh@155 -- # true 00:14:18.553 07:41:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:18.553 07:41:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:18.553 Cannot find device "nvmf_tgt_br" 00:14:18.553 07:41:44 -- nvmf/common.sh@157 -- # true 00:14:18.553 07:41:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:18.553 Cannot find device "nvmf_tgt_br2" 00:14:18.553 07:41:44 -- nvmf/common.sh@158 -- # true 00:14:18.553 07:41:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:18.553 07:41:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:18.553 07:41:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.553 07:41:44 -- nvmf/common.sh@161 -- # true 00:14:18.553 07:41:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.553 07:41:44 -- nvmf/common.sh@162 -- # true 00:14:18.553 07:41:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.553 07:41:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.553 07:41:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.553 07:41:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.553 07:41:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.553 07:41:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.553 07:41:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.553 07:41:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:18.553 07:41:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:18.553 07:41:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:18.553 07:41:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:18.553 07:41:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:18.553 07:41:44 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:18.553 07:41:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.553 07:41:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.553 07:41:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.812 07:41:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:18.812 07:41:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:18.812 07:41:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.812 07:41:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.812 07:41:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.812 07:41:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.812 07:41:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.812 07:41:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:18.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:18.812 00:14:18.812 --- 10.0.0.2 ping statistics --- 00:14:18.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.812 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:18.812 07:41:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:18.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:14:18.812 00:14:18.812 --- 10.0.0.3 ping statistics --- 00:14:18.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.812 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:18.812 07:41:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:18.812 00:14:18.812 --- 10.0.0.1 ping statistics --- 00:14:18.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.813 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:18.813 07:41:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.813 07:41:44 -- nvmf/common.sh@421 -- # return 0 00:14:18.813 07:41:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:18.813 07:41:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.813 07:41:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:18.813 07:41:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:18.813 07:41:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.813 07:41:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:18.813 07:41:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:18.813 07:41:44 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:18.813 07:41:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:18.813 07:41:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.813 07:41:44 -- common/autotest_common.sh@10 -- # set +x 00:14:18.813 07:41:44 -- nvmf/common.sh@469 -- # nvmfpid=70507 00:14:18.813 07:41:44 -- nvmf/common.sh@470 -- # waitforlisten 70507 00:14:18.813 07:41:44 -- common/autotest_common.sh@829 -- # '[' -z 70507 ']' 00:14:18.813 07:41:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.813 07:41:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.813 07:41:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.813 07:41:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.813 07:41:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.813 07:41:44 -- common/autotest_common.sh@10 -- # set +x 00:14:18.813 [2024-12-02 07:41:44.305711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:18.813 [2024-12-02 07:41:44.305787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.072 [2024-12-02 07:41:44.439374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.072 [2024-12-02 07:41:44.492487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.072 [2024-12-02 07:41:44.492612] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.072 [2024-12-02 07:41:44.492626] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.072 [2024-12-02 07:41:44.492647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
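Editor's note: because NET_TYPE=virt, the discovery test runs against a veth/bridge topology built by nvmf_veth_init rather than real NICs: the target's two interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator side keeps 10.0.0.1, and everything is bridged over nvmf_br with an iptables accept rule for port 4420. A condensed recap of the commands traced above, happy path only (the loops and backgrounding are editorial shorthand):

    # Namespace and veth pairs (names and addresses as in the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Address the initiator side and the two target-side interfaces.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring everything up and bridge the host-side peers together.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check, then the target is started inside the namespace.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &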
00:14:19.072 [2024-12-02 07:41:44.492726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.019 07:41:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.019 07:41:45 -- common/autotest_common.sh@862 -- # return 0 00:14:20.019 07:41:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.019 07:41:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 07:41:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.019 07:41:45 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.019 07:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 [2024-12-02 07:41:45.358771] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.019 07:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 07:41:45 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:14:20.019 07:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 [2024-12-02 07:41:45.366876] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:20.019 07:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 07:41:45 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:20.019 07:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 null0 00:14:20.019 07:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 07:41:45 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:20.019 07:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 null1 00:14:20.019 07:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 07:41:45 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:20.019 07:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:20.019 07:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.019 07:41:45 -- host/discovery.sh@45 -- # hostpid=70541 00:14:20.019 07:41:45 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:20.019 07:41:45 -- host/discovery.sh@46 -- # waitforlisten 70541 /tmp/host.sock 00:14:20.019 07:41:45 -- common/autotest_common.sh@829 -- # '[' -z 70541 ']' 00:14:20.019 07:41:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:20.019 07:41:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.019 07:41:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:20.019 07:41:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.019 07:41:45 -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 [2024-12-02 07:41:45.452484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:20.019 [2024-12-02 07:41:45.452780] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70541 ] 00:14:20.019 [2024-12-02 07:41:45.593449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.278 [2024-12-02 07:41:45.661053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:20.278 [2024-12-02 07:41:45.661492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.846 07:41:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.846 07:41:46 -- common/autotest_common.sh@862 -- # return 0 00:14:20.846 07:41:46 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:20.846 07:41:46 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:20.846 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 07:41:46 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:20.846 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 07:41:46 -- host/discovery.sh@72 -- # notify_id=0 00:14:20.846 07:41:46 -- host/discovery.sh@78 -- # get_subsystem_names 00:14:20.846 07:41:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:20.846 07:41:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:20.846 07:41:46 -- host/discovery.sh@59 -- # sort 00:14:20.846 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 07:41:46 -- host/discovery.sh@59 -- # xargs 00:14:20.846 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.846 07:41:46 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:14:20.846 07:41:46 -- host/discovery.sh@79 -- # get_bdev_list 00:14:20.846 07:41:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:20.846 07:41:46 -- host/discovery.sh@55 -- # sort 00:14:20.846 07:41:46 -- host/discovery.sh@55 -- # xargs 00:14:20.846 07:41:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:20.846 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.846 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:20.846 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.105 07:41:46 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:14:21.105 07:41:46 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:21.105 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.105 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.105 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.105 07:41:46 -- host/discovery.sh@82 -- # get_subsystem_names 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:21.105 07:41:46 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.105 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # xargs 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # sort 00:14:21.105 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.105 07:41:46 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:14:21.105 07:41:46 -- host/discovery.sh@83 -- # get_bdev_list 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # sort 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:21.105 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.105 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # xargs 00:14:21.105 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.105 07:41:46 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:21.105 07:41:46 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:21.105 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.105 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.105 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.105 07:41:46 -- host/discovery.sh@86 -- # get_subsystem_names 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # sort 00:14:21.105 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.105 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # xargs 00:14:21.105 07:41:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:21.105 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.105 07:41:46 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:14:21.105 07:41:46 -- host/discovery.sh@87 -- # get_bdev_list 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:21.105 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # sort 00:14:21.105 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.105 07:41:46 -- host/discovery.sh@55 -- # xargs 00:14:21.105 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:21.364 07:41:46 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.364 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.364 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.364 [2024-12-02 07:41:46.743205] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.364 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@92 -- # get_subsystem_names 00:14:21.364 07:41:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:21.364 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.364 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.364 07:41:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:21.364 07:41:46 -- host/discovery.sh@59 -- # xargs 00:14:21.364 07:41:46 -- host/discovery.sh@59 -- # sort 
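The discovery test setup traced at host/discovery.sh@30-51 above can be condensed into the following sketch (the RPC commands are copied from the trace; the two get_* helpers are reconstructed from the @55/@59 pipelines, so their exact definitions in host/discovery.sh may differ slightly):

  # target side: TCP transport, discovery listener on 8009, two null bdevs, empty subsystem
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0

  # host side: a second nvmf_tgt on /tmp/host.sock acts as the discovering host
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # helpers polled throughout the test (reconstructed shape):
  get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }

Both helpers return empty strings at this point because cnode0 has no namespaces and no data listener yet; the [[ '' == '' ]] checks above assert exactly that.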
00:14:21.364 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:21.364 07:41:46 -- host/discovery.sh@93 -- # get_bdev_list 00:14:21.364 07:41:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:21.364 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.364 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.364 07:41:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:21.364 07:41:46 -- host/discovery.sh@55 -- # xargs 00:14:21.364 07:41:46 -- host/discovery.sh@55 -- # sort 00:14:21.364 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:14:21.364 07:41:46 -- host/discovery.sh@94 -- # get_notification_count 00:14:21.364 07:41:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:21.364 07:41:46 -- host/discovery.sh@74 -- # jq '. | length' 00:14:21.364 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.364 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.364 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@74 -- # notification_count=0 00:14:21.364 07:41:46 -- host/discovery.sh@75 -- # notify_id=0 00:14:21.364 07:41:46 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:21.364 07:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.364 07:41:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.364 07:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.364 07:41:46 -- host/discovery.sh@100 -- # sleep 1 00:14:21.930 [2024-12-02 07:41:47.401595] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:21.930 [2024-12-02 07:41:47.401640] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:21.930 [2024-12-02 07:41:47.401657] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:21.930 [2024-12-02 07:41:47.407633] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:21.930 [2024-12-02 07:41:47.463034] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:21.930 [2024-12-02 07:41:47.463060] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:22.497 07:41:47 -- host/discovery.sh@101 -- # get_subsystem_names 00:14:22.497 07:41:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:22.497 07:41:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:22.497 07:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.497 07:41:47 -- common/autotest_common.sh@10 -- # set +x 00:14:22.497 07:41:47 -- host/discovery.sh@59 -- # sort 00:14:22.497 07:41:47 -- host/discovery.sh@59 -- # xargs 00:14:22.497 07:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.497 07:41:47 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.497 07:41:47 -- host/discovery.sh@102 -- # get_bdev_list 00:14:22.497 07:41:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:14:22.497 07:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.497 07:41:47 -- common/autotest_common.sh@10 -- # set +x 00:14:22.497 07:41:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:22.497 07:41:47 -- host/discovery.sh@55 -- # xargs 00:14:22.497 07:41:47 -- host/discovery.sh@55 -- # sort 00:14:22.497 07:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.497 07:41:48 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:22.497 07:41:48 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:14:22.497 07:41:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:22.497 07:41:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:22.497 07:41:48 -- host/discovery.sh@63 -- # sort -n 00:14:22.497 07:41:48 -- host/discovery.sh@63 -- # xargs 00:14:22.497 07:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.497 07:41:48 -- common/autotest_common.sh@10 -- # set +x 00:14:22.497 07:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.497 07:41:48 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:14:22.497 07:41:48 -- host/discovery.sh@104 -- # get_notification_count 00:14:22.497 07:41:48 -- host/discovery.sh@74 -- # jq '. | length' 00:14:22.497 07:41:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:22.497 07:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.497 07:41:48 -- common/autotest_common.sh@10 -- # set +x 00:14:22.497 07:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.756 07:41:48 -- host/discovery.sh@74 -- # notification_count=1 00:14:22.756 07:41:48 -- host/discovery.sh@75 -- # notify_id=1 00:14:22.756 07:41:48 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:14:22.756 07:41:48 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:22.756 07:41:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.756 07:41:48 -- common/autotest_common.sh@10 -- # set +x 00:14:22.756 07:41:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.756 07:41:48 -- host/discovery.sh@109 -- # sleep 1 00:14:23.692 07:41:49 -- host/discovery.sh@110 -- # get_bdev_list 00:14:23.692 07:41:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:23.692 07:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.692 07:41:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:23.692 07:41:49 -- common/autotest_common.sh@10 -- # set +x 00:14:23.692 07:41:49 -- host/discovery.sh@55 -- # sort 00:14:23.692 07:41:49 -- host/discovery.sh@55 -- # xargs 00:14:23.692 07:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.692 07:41:49 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:23.692 07:41:49 -- host/discovery.sh@111 -- # get_notification_count 00:14:23.692 07:41:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:23.692 07:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.692 07:41:49 -- common/autotest_common.sh@10 -- # set +x 00:14:23.692 07:41:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:23.692 07:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.692 07:41:49 -- host/discovery.sh@74 -- # notification_count=1 00:14:23.692 07:41:49 -- host/discovery.sh@75 -- # notify_id=2 00:14:23.692 07:41:49 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:14:23.692 07:41:49 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:14:23.692 07:41:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.692 07:41:49 -- common/autotest_common.sh@10 -- # set +x 00:14:23.692 [2024-12-02 07:41:49.269755] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:23.692 [2024-12-02 07:41:49.270872] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:23.692 [2024-12-02 07:41:49.270907] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:23.692 07:41:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.692 07:41:49 -- host/discovery.sh@117 -- # sleep 1 00:14:23.692 [2024-12-02 07:41:49.276877] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:14:23.951 [2024-12-02 07:41:49.335126] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:23.951 [2024-12-02 07:41:49.335150] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:23.951 [2024-12-02 07:41:49.335172] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:24.887 07:41:50 -- host/discovery.sh@118 -- # get_subsystem_names 00:14:24.887 07:41:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:24.887 07:41:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:24.887 07:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.887 07:41:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.887 07:41:50 -- host/discovery.sh@59 -- # sort 00:14:24.887 07:41:50 -- host/discovery.sh@59 -- # xargs 00:14:24.887 07:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.887 07:41:50 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.887 07:41:50 -- host/discovery.sh@119 -- # get_bdev_list 00:14:24.887 07:41:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:24.887 07:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.887 07:41:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.887 07:41:50 -- host/discovery.sh@55 -- # sort 00:14:24.887 07:41:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:24.887 07:41:50 -- host/discovery.sh@55 -- # xargs 00:14:24.887 07:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.887 07:41:50 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:24.887 07:41:50 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:14:24.887 07:41:50 -- host/discovery.sh@63 -- # sort -n 00:14:24.887 07:41:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:24.887 07:41:50 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:24.887 07:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.887 07:41:50 -- host/discovery.sh@63 -- # 
xargs 00:14:24.887 07:41:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.887 07:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.887 07:41:50 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:24.887 07:41:50 -- host/discovery.sh@121 -- # get_notification_count 00:14:24.887 07:41:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:24.887 07:41:50 -- host/discovery.sh@74 -- # jq '. | length' 00:14:24.887 07:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.887 07:41:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.888 07:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.888 07:41:50 -- host/discovery.sh@74 -- # notification_count=0 00:14:24.888 07:41:50 -- host/discovery.sh@75 -- # notify_id=2 00:14:24.888 07:41:50 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:14:24.888 07:41:50 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:24.888 07:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.888 07:41:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.888 [2024-12-02 07:41:50.500073] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:24.888 [2024-12-02 07:41:50.500125] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:24.888 [2024-12-02 07:41:50.501613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.888 [2024-12-02 07:41:50.501681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.888 [2024-12-02 07:41:50.501710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.888 [2024-12-02 07:41:50.501718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.888 [2024-12-02 07:41:50.501727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.888 [2024-12-02 07:41:50.501735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.888 [2024-12-02 07:41:50.501744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.888 [2024-12-02 07:41:50.501751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.888 [2024-12-02 07:41:50.501760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1bc10 is same with the state(5) to be set 00:14:24.888 07:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.888 07:41:50 -- host/discovery.sh@127 -- # sleep 1 00:14:24.888 [2024-12-02 07:41:50.506064] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:14:24.888 [2024-12-02 07:41:50.506133] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:24.888 [2024-12-02 07:41:50.506191] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1bc10 (9): Bad file descriptor 00:14:26.265 07:41:51 -- host/discovery.sh@128 -- # get_subsystem_names 00:14:26.265 07:41:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:26.265 07:41:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:26.265 07:41:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.265 07:41:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.265 07:41:51 -- host/discovery.sh@59 -- # sort 00:14:26.265 07:41:51 -- host/discovery.sh@59 -- # xargs 00:14:26.265 07:41:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@129 -- # get_bdev_list 00:14:26.265 07:41:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:26.265 07:41:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:26.265 07:41:51 -- host/discovery.sh@55 -- # sort 00:14:26.265 07:41:51 -- host/discovery.sh@55 -- # xargs 00:14:26.265 07:41:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.265 07:41:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.265 07:41:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:14:26.265 07:41:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:26.265 07:41:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:26.265 07:41:51 -- host/discovery.sh@63 -- # sort -n 00:14:26.265 07:41:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.265 07:41:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.265 07:41:51 -- host/discovery.sh@63 -- # xargs 00:14:26.265 07:41:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@131 -- # get_notification_count 00:14:26.265 07:41:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:26.265 07:41:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.265 07:41:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.265 07:41:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:26.265 07:41:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@74 -- # notification_count=0 00:14:26.265 07:41:51 -- host/discovery.sh@75 -- # notify_id=2 00:14:26.265 07:41:51 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:26.265 07:41:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.265 07:41:51 -- common/autotest_common.sh@10 -- # set +x 00:14:26.265 07:41:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.265 07:41:51 -- host/discovery.sh@135 -- # sleep 1 00:14:27.202 07:41:52 -- host/discovery.sh@136 -- # get_subsystem_names 00:14:27.202 07:41:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:27.202 07:41:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:27.202 07:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.202 07:41:52 -- common/autotest_common.sh@10 -- # set +x 00:14:27.202 07:41:52 -- host/discovery.sh@59 -- # xargs 00:14:27.202 07:41:52 -- host/discovery.sh@59 -- # sort 00:14:27.202 07:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.202 07:41:52 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:14:27.202 07:41:52 -- host/discovery.sh@137 -- # get_bdev_list 00:14:27.202 07:41:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:27.202 07:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.202 07:41:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:27.202 07:41:52 -- common/autotest_common.sh@10 -- # set +x 00:14:27.202 07:41:52 -- host/discovery.sh@55 -- # sort 00:14:27.202 07:41:52 -- host/discovery.sh@55 -- # xargs 00:14:27.202 07:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.461 07:41:52 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:14:27.461 07:41:52 -- host/discovery.sh@138 -- # get_notification_count 00:14:27.461 07:41:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:27.461 07:41:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:27.461 07:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.461 07:41:52 -- common/autotest_common.sh@10 -- # set +x 00:14:27.461 07:41:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.461 07:41:52 -- host/discovery.sh@74 -- # notification_count=2 00:14:27.461 07:41:52 -- host/discovery.sh@75 -- # notify_id=4 00:14:27.461 07:41:52 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:14:27.461 07:41:52 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.461 07:41:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.461 07:41:52 -- common/autotest_common.sh@10 -- # set +x 00:14:28.397 [2024-12-02 07:41:53.913824] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:28.397 [2024-12-02 07:41:53.913848] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:28.398 [2024-12-02 07:41:53.913880] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:28.398 [2024-12-02 07:41:53.919856] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:14:28.398 [2024-12-02 07:41:53.978955] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:28.398 [2024-12-02 07:41:53.979006] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:28.398 07:41:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.398 07:41:53 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:28.398 07:41:53 -- common/autotest_common.sh@650 -- # local es=0 00:14:28.398 07:41:53 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:28.398 07:41:53 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.398 07:41:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.398 07:41:53 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.398 07:41:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.398 07:41:53 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:28.398 07:41:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.398 07:41:53 -- common/autotest_common.sh@10 -- # set +x 00:14:28.398 request: 00:14:28.398 { 00:14:28.398 "name": "nvme", 00:14:28.398 "trtype": "tcp", 00:14:28.398 "traddr": "10.0.0.2", 00:14:28.398 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:28.398 "adrfam": "ipv4", 00:14:28.398 "trsvcid": "8009", 00:14:28.398 "wait_for_attach": true, 00:14:28.398 "method": "bdev_nvme_start_discovery", 00:14:28.398 "req_id": 1 00:14:28.398 } 00:14:28.398 Got JSON-RPC error response 00:14:28.398 response: 00:14:28.398 { 00:14:28.398 "code": -17, 00:14:28.398 "message": "File exists" 00:14:28.398 } 00:14:28.398 07:41:53 -- common/autotest_common.sh@589 -- # 
[[ 1 == 0 ]] 00:14:28.398 07:41:53 -- common/autotest_common.sh@653 -- # es=1 00:14:28.398 07:41:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.398 07:41:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.398 07:41:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.398 07:41:53 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:14:28.398 07:41:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:28.398 07:41:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:28.398 07:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.398 07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:28.398 07:41:54 -- host/discovery.sh@67 -- # xargs 00:14:28.398 07:41:54 -- host/discovery.sh@67 -- # sort 00:14:28.398 07:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.657 07:41:54 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:14:28.657 07:41:54 -- host/discovery.sh@147 -- # get_bdev_list 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:28.657 07:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.657 07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # sort 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # xargs 00:14:28.657 07:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.657 07:41:54 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:28.657 07:41:54 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:28.657 07:41:54 -- common/autotest_common.sh@650 -- # local es=0 00:14:28.657 07:41:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:28.657 07:41:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.657 07:41:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.657 07:41:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.657 07:41:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.657 07:41:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:28.657 07:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.657 07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:28.657 request: 00:14:28.657 { 00:14:28.657 "name": "nvme_second", 00:14:28.657 "trtype": "tcp", 00:14:28.657 "traddr": "10.0.0.2", 00:14:28.657 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:28.657 "adrfam": "ipv4", 00:14:28.657 "trsvcid": "8009", 00:14:28.657 "wait_for_attach": true, 00:14:28.657 "method": "bdev_nvme_start_discovery", 00:14:28.657 "req_id": 1 00:14:28.657 } 00:14:28.657 Got JSON-RPC error response 00:14:28.657 response: 00:14:28.657 { 00:14:28.657 "code": -17, 00:14:28.657 "message": "File exists" 00:14:28.657 } 00:14:28.657 07:41:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:28.657 07:41:54 -- common/autotest_common.sh@653 -- # es=1 00:14:28.657 07:41:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.657 07:41:54 -- common/autotest_common.sh@672 -- 
# [[ -n '' ]] 00:14:28.657 07:41:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.657 07:41:54 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:14:28.657 07:41:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:28.657 07:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.657 07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:28.657 07:41:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:28.657 07:41:54 -- host/discovery.sh@67 -- # sort 00:14:28.657 07:41:54 -- host/discovery.sh@67 -- # xargs 00:14:28.657 07:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.657 07:41:54 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:14:28.657 07:41:54 -- host/discovery.sh@153 -- # get_bdev_list 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:28.657 07:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.657 07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # sort 00:14:28.657 07:41:54 -- host/discovery.sh@55 -- # xargs 00:14:28.657 07:41:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.657 07:41:54 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:28.657 07:41:54 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:28.657 07:41:54 -- common/autotest_common.sh@650 -- # local es=0 00:14:28.657 07:41:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:28.657 07:41:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.657 07:41:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.657 07:41:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.657 07:41:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.657 07:41:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:28.657 07:41:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.657 07:41:54 -- common/autotest_common.sh@10 -- # set +x 00:14:30.033 [2024-12-02 07:41:55.237064] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:30.033 [2024-12-02 07:41:55.237178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:30.033 [2024-12-02 07:41:55.237220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:30.033 [2024-12-02 07:41:55.237235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1e8a0 with addr=10.0.0.2, port=8010 00:14:30.033 [2024-12-02 07:41:55.237250] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:30.033 [2024-12-02 07:41:55.237258] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:30.033 [2024-12-02 07:41:55.237266] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:30.969 [2024-12-02 07:41:56.237041] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:30.969 
[2024-12-02 07:41:56.237138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:30.969 [2024-12-02 07:41:56.237177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:30.969 [2024-12-02 07:41:56.237192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1e8a0 with addr=10.0.0.2, port=8010 00:14:30.969 [2024-12-02 07:41:56.237206] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:30.969 [2024-12-02 07:41:56.237213] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:30.969 [2024-12-02 07:41:56.237221] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:31.906 [2024-12-02 07:41:57.236970] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:14:31.906 request: 00:14:31.906 { 00:14:31.906 "name": "nvme_second", 00:14:31.906 "trtype": "tcp", 00:14:31.906 "traddr": "10.0.0.2", 00:14:31.906 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:31.906 "adrfam": "ipv4", 00:14:31.906 "trsvcid": "8010", 00:14:31.906 "attach_timeout_ms": 3000, 00:14:31.906 "method": "bdev_nvme_start_discovery", 00:14:31.906 "req_id": 1 00:14:31.906 } 00:14:31.906 Got JSON-RPC error response 00:14:31.906 response: 00:14:31.906 { 00:14:31.906 "code": -110, 00:14:31.906 "message": "Connection timed out" 00:14:31.906 } 00:14:31.906 07:41:57 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:31.906 07:41:57 -- common/autotest_common.sh@653 -- # es=1 00:14:31.906 07:41:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.906 07:41:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.906 07:41:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.906 07:41:57 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:14:31.906 07:41:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:31.906 07:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.906 07:41:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:31.906 07:41:57 -- common/autotest_common.sh@10 -- # set +x 00:14:31.906 07:41:57 -- host/discovery.sh@67 -- # sort 00:14:31.906 07:41:57 -- host/discovery.sh@67 -- # xargs 00:14:31.906 07:41:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.906 07:41:57 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:14:31.906 07:41:57 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:14:31.906 07:41:57 -- host/discovery.sh@162 -- # kill 70541 00:14:31.906 07:41:57 -- host/discovery.sh@163 -- # nvmftestfini 00:14:31.906 07:41:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:31.906 07:41:57 -- nvmf/common.sh@116 -- # sync 00:14:31.906 07:41:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:31.906 07:41:57 -- nvmf/common.sh@119 -- # set +e 00:14:31.906 07:41:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:31.906 07:41:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:31.906 rmmod nvme_tcp 00:14:31.906 rmmod nvme_fabrics 00:14:31.906 rmmod nvme_keyring 00:14:31.906 07:41:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:31.906 07:41:57 -- nvmf/common.sh@123 -- # set -e 00:14:31.906 07:41:57 -- nvmf/common.sh@124 -- # return 0 00:14:31.906 07:41:57 -- nvmf/common.sh@477 -- # '[' -n 70507 ']' 00:14:31.906 07:41:57 -- nvmf/common.sh@478 -- # killprocess 70507 00:14:31.906 07:41:57 -- common/autotest_common.sh@936 -- # '[' -z 70507 ']' 
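Across host/discovery.sh@99-158 the test drives the full discovery life cycle: adding the host to cnode0 attaches controller nvme0 via 10.0.0.2:4420 and surfaces nvme0n1, adding null1 surfaces nvme0n2, adding the 4421 listener and removing 4420 switches the controller path, and bdev_nvme_stop_discovery removes everything again. Progress is measured through the bdev notification stream; the helper seen at host/discovery.sh@74-75 behaves like the sketch below (an assumed shape, consistent with the notify_id values 0, 1, 2, 4 logged above):

  get_notification_count() {
      # count events newer than the last consumed id, then advance the cursor
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

The two negative cases are also visible above: re-issuing bdev_nvme_start_discovery for an existing discovery name returns JSON-RPC error -17 ("File exists"), and pointing nvme_second at the unused port 8010 with -T 3000 fails after the attach timeout with -110 ("Connection timed out").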
00:14:31.906 07:41:57 -- common/autotest_common.sh@940 -- # kill -0 70507 00:14:31.906 07:41:57 -- common/autotest_common.sh@941 -- # uname 00:14:31.906 07:41:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.906 07:41:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70507 00:14:31.906 07:41:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:31.906 killing process with pid 70507 00:14:31.906 07:41:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:31.906 07:41:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70507' 00:14:31.906 07:41:57 -- common/autotest_common.sh@955 -- # kill 70507 00:14:31.906 07:41:57 -- common/autotest_common.sh@960 -- # wait 70507 00:14:32.165 07:41:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.165 07:41:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.165 07:41:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:32.165 07:41:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.165 07:41:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:32.165 07:41:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.165 07:41:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.165 07:41:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.165 07:41:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:32.165 00:14:32.165 real 0m13.874s 00:14:32.165 user 0m26.764s 00:14:32.165 sys 0m2.084s 00:14:32.165 07:41:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.165 07:41:57 -- common/autotest_common.sh@10 -- # set +x 00:14:32.165 ************************************ 00:14:32.165 END TEST nvmf_discovery 00:14:32.165 ************************************ 00:14:32.165 07:41:57 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:32.165 07:41:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:32.165 07:41:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.165 07:41:57 -- common/autotest_common.sh@10 -- # set +x 00:14:32.165 ************************************ 00:14:32.165 START TEST nvmf_discovery_remove_ifc 00:14:32.165 ************************************ 00:14:32.165 07:41:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:32.165 * Looking for test storage... 
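The teardown traced at the end of nvmf_discovery (host/discovery.sh@160-163 plus nvmftestfini in nvmf/common.sh) boils down to the following order; the netns removal step is inferred from _remove_spdk_ns and from the "Cannot open network namespace" messages when the next test re-creates it:

  kill "$hostpid"                      # stop the discovering nvmf_tgt first
  modprobe -v -r nvme-tcp              # unload kernel initiator modules
  modprobe -v -r nvme-fabrics          #   (the rmmod lines above are their output)
  killprocess "$nvmfpid"               # kill -0 / kill the target app, pid 70507 here
  ip netns delete nvmf_tgt_ns_spdk     # _remove_spdk_ns (assumed exact command)
  ip -4 addr flush nvmf_init_if        # nvmf/common.sh@278

The real/user/sys line and the END TEST banner that follow are emitted by run_test once the whole script returns 0, after which run_test launches discovery_remove_ifc.sh the same way.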
00:14:32.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:32.165 07:41:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:32.165 07:41:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:32.165 07:41:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:32.424 07:41:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:32.424 07:41:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:32.424 07:41:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:32.424 07:41:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:32.424 07:41:57 -- scripts/common.sh@335 -- # IFS=.-: 00:14:32.424 07:41:57 -- scripts/common.sh@335 -- # read -ra ver1 00:14:32.424 07:41:57 -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.424 07:41:57 -- scripts/common.sh@336 -- # read -ra ver2 00:14:32.424 07:41:57 -- scripts/common.sh@337 -- # local 'op=<' 00:14:32.424 07:41:57 -- scripts/common.sh@339 -- # ver1_l=2 00:14:32.424 07:41:57 -- scripts/common.sh@340 -- # ver2_l=1 00:14:32.424 07:41:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:32.424 07:41:57 -- scripts/common.sh@343 -- # case "$op" in 00:14:32.424 07:41:57 -- scripts/common.sh@344 -- # : 1 00:14:32.424 07:41:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:32.424 07:41:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:32.424 07:41:57 -- scripts/common.sh@364 -- # decimal 1 00:14:32.424 07:41:57 -- scripts/common.sh@352 -- # local d=1 00:14:32.424 07:41:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.424 07:41:57 -- scripts/common.sh@354 -- # echo 1 00:14:32.424 07:41:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:32.424 07:41:57 -- scripts/common.sh@365 -- # decimal 2 00:14:32.424 07:41:57 -- scripts/common.sh@352 -- # local d=2 00:14:32.424 07:41:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.424 07:41:57 -- scripts/common.sh@354 -- # echo 2 00:14:32.424 07:41:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:32.424 07:41:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:32.424 07:41:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:32.424 07:41:57 -- scripts/common.sh@367 -- # return 0 00:14:32.424 07:41:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.424 07:41:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:32.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.425 --rc genhtml_branch_coverage=1 00:14:32.425 --rc genhtml_function_coverage=1 00:14:32.425 --rc genhtml_legend=1 00:14:32.425 --rc geninfo_all_blocks=1 00:14:32.425 --rc geninfo_unexecuted_blocks=1 00:14:32.425 00:14:32.425 ' 00:14:32.425 07:41:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:32.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.425 --rc genhtml_branch_coverage=1 00:14:32.425 --rc genhtml_function_coverage=1 00:14:32.425 --rc genhtml_legend=1 00:14:32.425 --rc geninfo_all_blocks=1 00:14:32.425 --rc geninfo_unexecuted_blocks=1 00:14:32.425 00:14:32.425 ' 00:14:32.425 07:41:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:32.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.425 --rc genhtml_branch_coverage=1 00:14:32.425 --rc genhtml_function_coverage=1 00:14:32.425 --rc genhtml_legend=1 00:14:32.425 --rc geninfo_all_blocks=1 00:14:32.425 --rc geninfo_unexecuted_blocks=1 00:14:32.425 00:14:32.425 ' 00:14:32.425 
07:41:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:32.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.425 --rc genhtml_branch_coverage=1 00:14:32.425 --rc genhtml_function_coverage=1 00:14:32.425 --rc genhtml_legend=1 00:14:32.425 --rc geninfo_all_blocks=1 00:14:32.425 --rc geninfo_unexecuted_blocks=1 00:14:32.425 00:14:32.425 ' 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.425 07:41:57 -- nvmf/common.sh@7 -- # uname -s 00:14:32.425 07:41:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.425 07:41:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.425 07:41:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.425 07:41:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.425 07:41:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.425 07:41:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.425 07:41:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.425 07:41:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.425 07:41:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.425 07:41:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.425 07:41:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:14:32.425 07:41:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:14:32.425 07:41:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.425 07:41:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.425 07:41:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.425 07:41:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.425 07:41:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.425 07:41:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.425 07:41:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.425 07:41:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.425 07:41:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.425 07:41:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.425 07:41:57 -- paths/export.sh@5 -- # export PATH 00:14:32.425 07:41:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.425 07:41:57 -- nvmf/common.sh@46 -- # : 0 00:14:32.425 07:41:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:32.425 07:41:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:32.425 07:41:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:32.425 07:41:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.425 07:41:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.425 07:41:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:32.425 07:41:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:32.425 07:41:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:14:32.425 07:41:57 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:14:32.425 07:41:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:32.425 07:41:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.425 07:41:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:32.425 07:41:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:32.425 07:41:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:32.425 07:41:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.425 07:41:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.425 07:41:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.425 07:41:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:32.425 07:41:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:32.425 07:41:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:32.425 07:41:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:32.425 07:41:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:32.425 07:41:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:32.425 07:41:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.425 07:41:57 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.425 07:41:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:32.425 07:41:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:32.425 07:41:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:32.425 07:41:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:32.425 07:41:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:32.425 07:41:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.425 07:41:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:32.425 07:41:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:32.425 07:41:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:32.425 07:41:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:32.425 07:41:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:32.425 07:41:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:32.425 Cannot find device "nvmf_tgt_br" 00:14:32.425 07:41:57 -- nvmf/common.sh@154 -- # true 00:14:32.425 07:41:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.425 Cannot find device "nvmf_tgt_br2" 00:14:32.425 07:41:57 -- nvmf/common.sh@155 -- # true 00:14:32.425 07:41:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:32.425 07:41:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:32.425 Cannot find device "nvmf_tgt_br" 00:14:32.425 07:41:57 -- nvmf/common.sh@157 -- # true 00:14:32.425 07:41:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:32.425 Cannot find device "nvmf_tgt_br2" 00:14:32.425 07:41:57 -- nvmf/common.sh@158 -- # true 00:14:32.425 07:41:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:32.425 07:41:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:32.425 07:41:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.425 07:41:58 -- nvmf/common.sh@161 -- # true 00:14:32.425 07:41:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.425 07:41:58 -- nvmf/common.sh@162 -- # true 00:14:32.425 07:41:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.425 07:41:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.425 07:41:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.425 07:41:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.425 07:41:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.685 07:41:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.685 07:41:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.685 07:41:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:32.685 07:41:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:32.685 07:41:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:32.685 07:41:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:32.685 07:41:58 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:32.685 07:41:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:32.685 07:41:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.685 07:41:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.685 07:41:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.685 07:41:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:32.685 07:41:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:32.685 07:41:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:32.685 07:41:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.685 07:41:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.685 07:41:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.685 07:41:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.685 07:41:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:32.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:32.685 00:14:32.685 --- 10.0.0.2 ping statistics --- 00:14:32.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.685 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:32.685 07:41:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:32.685 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.685 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:32.685 00:14:32.685 --- 10.0.0.3 ping statistics --- 00:14:32.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.685 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:32.685 07:41:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:32.685 00:14:32.685 --- 10.0.0.1 ping statistics --- 00:14:32.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.685 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:32.685 07:41:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.685 07:41:58 -- nvmf/common.sh@421 -- # return 0 00:14:32.685 07:41:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:32.685 07:41:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.685 07:41:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:32.685 07:41:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:32.685 07:41:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.685 07:41:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:32.685 07:41:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:32.685 07:41:58 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:14:32.685 07:41:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:32.685 07:41:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.685 07:41:58 -- common/autotest_common.sh@10 -- # set +x 00:14:32.685 07:41:58 -- nvmf/common.sh@469 -- # nvmfpid=71042 00:14:32.685 07:41:58 -- nvmf/common.sh@470 -- # waitforlisten 71042 00:14:32.685 07:41:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:32.685 07:41:58 -- common/autotest_common.sh@829 -- # '[' -z 71042 ']' 00:14:32.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.685 07:41:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.685 07:41:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.685 07:41:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.685 07:41:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.685 07:41:58 -- common/autotest_common.sh@10 -- # set +x 00:14:32.685 [2024-12-02 07:41:58.295839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:32.685 [2024-12-02 07:41:58.295919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.955 [2024-12-02 07:41:58.428916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.955 [2024-12-02 07:41:58.478191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.955 [2024-12-02 07:41:58.478555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.956 [2024-12-02 07:41:58.478673] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.956 [2024-12-02 07:41:58.478773] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
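For reference, the veth/namespace plumbing that nvmf_veth_init traces above can be reproduced by hand with roughly the following standalone sketch (interface, namespace, and address names are the ones used in this log; the pre-cleanup of stale links and all error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target data path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator

The nvmf_br bridge ties the three *_br peers together, which is why the 10.0.0.0/24 addresses on both sides of the namespace boundary answer the ping checks above.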
00:14:32.956 [2024-12-02 07:41:58.478859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.892 07:41:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.892 07:41:59 -- common/autotest_common.sh@862 -- # return 0 00:14:33.892 07:41:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:33.892 07:41:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.892 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:14:33.892 07:41:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.892 07:41:59 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:14:33.892 07:41:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.892 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:14:33.892 [2024-12-02 07:41:59.209939] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.892 [2024-12-02 07:41:59.218040] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:33.892 null0 00:14:33.892 [2024-12-02 07:41:59.249989] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.892 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:33.892 07:41:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.892 07:41:59 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71074 00:14:33.892 07:41:59 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:14:33.892 07:41:59 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71074 /tmp/host.sock 00:14:33.892 07:41:59 -- common/autotest_common.sh@829 -- # '[' -z 71074 ']' 00:14:33.892 07:41:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:33.892 07:41:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.892 07:41:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:33.892 07:41:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.892 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:14:33.892 [2024-12-02 07:41:59.328517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:33.892 [2024-12-02 07:41:59.328807] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71074 ] 00:14:33.892 [2024-12-02 07:41:59.461697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.892 [2024-12-02 07:41:59.512965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.892 [2024-12-02 07:41:59.513346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.151 07:41:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.151 07:41:59 -- common/autotest_common.sh@862 -- # return 0 00:14:34.151 07:41:59 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.151 07:41:59 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:14:34.151 07:41:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.151 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:14:34.151 07:41:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.151 07:41:59 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:14:34.151 07:41:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.151 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:14:34.151 07:41:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.151 07:41:59 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:14:34.151 07:41:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.151 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:14:35.088 [2024-12-02 07:42:00.632982] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:35.088 [2024-12-02 07:42:00.633195] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:35.088 [2024-12-02 07:42:00.633244] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:35.088 [2024-12-02 07:42:00.639026] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:35.088 [2024-12-02 07:42:00.694803] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:14:35.088 [2024-12-02 07:42:00.695008] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:14:35.088 [2024-12-02 07:42:00.695077] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:14:35.088 [2024-12-02 07:42:00.695193] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:35.088 [2024-12-02 07:42:00.695272] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:35.088 07:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.088 07:42:00 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:14:35.088 07:42:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:35.088 07:42:00 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:35.088 [2024-12-02 07:42:00.701760] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1dfcbe0 was disconnected and freed. delete nvme_qpair. 00:14:35.089 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:35.089 07:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.089 07:42:00 -- common/autotest_common.sh@10 -- # set +x 00:14:35.089 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:35.089 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:35.348 07:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:35.348 07:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:35.348 07:42:00 -- common/autotest_common.sh@10 -- # set +x 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:35.348 07:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:35.348 07:42:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:36.284 07:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:36.284 07:42:01 -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:36.284 07:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:36.284 07:42:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:37.660 07:42:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.660 07:42:02 -- common/autotest_common.sh@10 -- # set +x 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:37.660 07:42:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:37.660 07:42:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
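The host side of this test is a second nvmf_tgt started with -r /tmp/host.sock --wait-for-rpc -L bdev_nvme and driven purely over RPC (bdev_nvme_set_options -e 1, framework_start_init, then bdev_nvme_start_discovery against 10.0.0.2:8009, as traced above). The get_bdev_list/wait_for_bdev helpers that the trace keeps repeating reduce to roughly the sketch below; the real helpers live in host/discovery_remove_ifc.sh and may differ in details:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_list() {
        # names of all bdevs currently visible to the host app, as one sorted line
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1    # discovery attached nvme0, so namespace nvme0n1 shows up
    wait_for_bdev ''         # after the target path is removed, the list drains to empty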
00:14:38.597 07:42:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:38.597 07:42:03 -- common/autotest_common.sh@10 -- # set +x 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:38.597 07:42:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:38.597 07:42:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:39.533 07:42:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.533 07:42:05 -- common/autotest_common.sh@10 -- # set +x 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:39.533 07:42:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:39.533 07:42:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:40.468 07:42:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:40.468 07:42:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:40.468 07:42:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.468 07:42:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:40.468 07:42:06 -- common/autotest_common.sh@10 -- # set +x 00:14:40.468 07:42:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:40.468 07:42:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:40.468 07:42:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.727 07:42:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:40.727 07:42:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:40.727 [2024-12-02 07:42:06.123121] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:14:40.727 [2024-12-02 07:42:06.123367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.727 [2024-12-02 07:42:06.123388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.727 [2024-12-02 07:42:06.123402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.727 [2024-12-02 07:42:06.123412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.727 [2024-12-02 07:42:06.123422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.727 [2024-12-02 07:42:06.123432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.727 [2024-12-02 07:42:06.123442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.727 [2024-12-02 07:42:06.123452] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.727 [2024-12-02 07:42:06.123463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.727 [2024-12-02 07:42:06.123473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.727 [2024-12-02 07:42:06.123482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71de0 is same with the state(5) to be set 00:14:40.727 [2024-12-02 07:42:06.133113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d71de0 (9): Bad file descriptor 00:14:40.727 [2024-12-02 07:42:06.143135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:41.664 07:42:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:41.664 07:42:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:41.664 07:42:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:41.664 07:42:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:41.664 07:42:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:41.664 07:42:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.664 07:42:07 -- common/autotest_common.sh@10 -- # set +x 00:14:41.664 [2024-12-02 07:42:07.166360] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:14:42.601 [2024-12-02 07:42:08.190381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:14:43.979 [2024-12-02 07:42:09.214387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:14:43.979 [2024-12-02 07:42:09.214505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d71de0 with addr=10.0.0.2, port=4420 00:14:43.979 [2024-12-02 07:42:09.214539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71de0 is same with the state(5) to be set 00:14:43.979 [2024-12-02 07:42:09.214589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:43.979 [2024-12-02 07:42:09.214612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:43.979 [2024-12-02 07:42:09.214631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:43.979 [2024-12-02 07:42:09.214652] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:14:43.979 [2024-12-02 07:42:09.215457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d71de0 (9): Bad file descriptor 00:14:43.979 [2024-12-02 07:42:09.215520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:14:43.979 [2024-12-02 07:42:09.215593] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:14:43.979 [2024-12-02 07:42:09.215674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.979 [2024-12-02 07:42:09.215707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.979 [2024-12-02 07:42:09.215745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.979 [2024-12-02 07:42:09.215768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.979 [2024-12-02 07:42:09.215790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.979 [2024-12-02 07:42:09.215810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.979 [2024-12-02 07:42:09.215832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.980 [2024-12-02 07:42:09.215853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.980 [2024-12-02 07:42:09.215875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.980 [2024-12-02 07:42:09.215895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.980 [2024-12-02 07:42:09.215915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
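The connect()/reset errors above are the expected fallout of the interface removal at discovery_remove_ifc.sh@75-76 (ip addr del 10.0.0.2/24 plus link down inside the target namespace). The next step, visible below, restores the path and waits for the still-running discovery service to re-attach the subsystem as a new controller; as a sketch built from the same traced commands:

    # drop the target data path: the host's reconnect attempts now fail with errno 110
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # nvme0n1 disappears once the controller is torn down

    # restore it: discovery re-attaches the subsystem, this time as nvme1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1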
00:14:43.980 [2024-12-02 07:42:09.215982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d721f0 (9): Bad file descriptor 00:14:43.980 [2024-12-02 07:42:09.216977] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:14:43.980 [2024-12-02 07:42:09.217024] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:14:43.980 07:42:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.980 07:42:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:43.980 07:42:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:44.916 07:42:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.916 07:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:44.916 07:42:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:44.916 07:42:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.916 07:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:44.916 07:42:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:14:44.916 07:42:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:45.896 [2024-12-02 07:42:11.227594] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:45.896 [2024-12-02 07:42:11.227760] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:45.896 [2024-12-02 07:42:11.227793] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:45.896 [2024-12-02 07:42:11.233634] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:14:45.896 [2024-12-02 07:42:11.288372] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:14:45.896 [2024-12-02 07:42:11.288565] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:14:45.896 [2024-12-02 07:42:11.288628] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:14:45.896 [2024-12-02 07:42:11.288733] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:14:45.896 [2024-12-02 07:42:11.288792] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:45.896 [2024-12-02 07:42:11.296147] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1db3ce0 was disconnected and freed. delete nvme_qpair. 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:45.896 07:42:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:45.896 07:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:45.896 07:42:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:14:45.896 07:42:11 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71074 00:14:45.896 07:42:11 -- common/autotest_common.sh@936 -- # '[' -z 71074 ']' 00:14:45.896 07:42:11 -- common/autotest_common.sh@940 -- # kill -0 71074 00:14:45.896 07:42:11 -- common/autotest_common.sh@941 -- # uname 00:14:45.896 07:42:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.896 07:42:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71074 00:14:45.896 killing process with pid 71074 00:14:45.896 07:42:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.896 07:42:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.896 07:42:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71074' 00:14:45.896 07:42:11 -- common/autotest_common.sh@955 -- # kill 71074 00:14:45.896 07:42:11 -- common/autotest_common.sh@960 -- # wait 71074 00:14:46.154 07:42:11 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:14:46.154 07:42:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:46.154 07:42:11 -- nvmf/common.sh@116 -- # sync 00:14:46.154 07:42:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:46.154 07:42:11 -- nvmf/common.sh@119 -- # set +e 00:14:46.154 07:42:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:46.154 07:42:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:46.154 rmmod nvme_tcp 00:14:46.154 rmmod nvme_fabrics 00:14:46.154 rmmod nvme_keyring 00:14:46.155 07:42:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:46.155 07:42:11 -- nvmf/common.sh@123 -- # set -e 00:14:46.155 07:42:11 -- nvmf/common.sh@124 -- # return 0 00:14:46.155 07:42:11 -- nvmf/common.sh@477 -- # '[' -n 71042 ']' 00:14:46.155 07:42:11 -- nvmf/common.sh@478 -- # killprocess 71042 00:14:46.155 07:42:11 -- common/autotest_common.sh@936 -- # '[' -z 71042 ']' 00:14:46.155 07:42:11 -- common/autotest_common.sh@940 -- # kill -0 71042 00:14:46.155 07:42:11 -- common/autotest_common.sh@941 -- # uname 00:14:46.155 07:42:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.155 07:42:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71042 00:14:46.155 killing process with pid 71042 00:14:46.155 07:42:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:46.155 07:42:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
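The shutdown running here (killprocess for the host app, then for the target, followed by module unload and removal of the network plumbing in nvmftestfini) condenses to roughly the following; the namespace-removal step is an assumption about what _remove_spdk_ns does, the rest mirrors the traced commands:

    kill "$hostpid" && wait "$hostpid"     # host-side nvmf_tgt (pid 71074 in this run)
    kill "$nvmfpid" && wait "$nvmfpid"     # target inside the namespace (pid 71042)
    modprobe -v -r nvme-tcp                # also drops nvme_fabrics / nvme_keyring as shown above
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk       # assumption: what _remove_spdk_ns boils down to here
    ip -4 addr flush nvmf_init_if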
00:14:46.155 07:42:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71042' 00:14:46.155 07:42:11 -- common/autotest_common.sh@955 -- # kill 71042 00:14:46.155 07:42:11 -- common/autotest_common.sh@960 -- # wait 71042 00:14:46.412 07:42:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:46.412 07:42:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:46.412 07:42:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:46.412 07:42:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.412 07:42:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:46.412 07:42:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.412 07:42:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.412 07:42:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.412 07:42:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:46.412 ************************************ 00:14:46.412 END TEST nvmf_discovery_remove_ifc 00:14:46.412 ************************************ 00:14:46.412 00:14:46.412 real 0m14.254s 00:14:46.412 user 0m22.490s 00:14:46.412 sys 0m2.297s 00:14:46.412 07:42:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.412 07:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:46.412 07:42:11 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:14:46.412 07:42:11 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:14:46.412 07:42:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.412 07:42:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.412 07:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:46.413 ************************************ 00:14:46.413 START TEST nvmf_digest 00:14:46.413 ************************************ 00:14:46.413 07:42:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:14:46.671 * Looking for test storage... 00:14:46.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:46.671 07:42:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:46.671 07:42:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:46.671 07:42:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:46.671 07:42:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:46.671 07:42:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:46.671 07:42:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:46.671 07:42:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:46.671 07:42:12 -- scripts/common.sh@335 -- # IFS=.-: 00:14:46.671 07:42:12 -- scripts/common.sh@335 -- # read -ra ver1 00:14:46.671 07:42:12 -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.671 07:42:12 -- scripts/common.sh@336 -- # read -ra ver2 00:14:46.671 07:42:12 -- scripts/common.sh@337 -- # local 'op=<' 00:14:46.671 07:42:12 -- scripts/common.sh@339 -- # ver1_l=2 00:14:46.671 07:42:12 -- scripts/common.sh@340 -- # ver2_l=1 00:14:46.671 07:42:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:46.671 07:42:12 -- scripts/common.sh@343 -- # case "$op" in 00:14:46.671 07:42:12 -- scripts/common.sh@344 -- # : 1 00:14:46.671 07:42:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:46.671 07:42:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.671 07:42:12 -- scripts/common.sh@364 -- # decimal 1 00:14:46.671 07:42:12 -- scripts/common.sh@352 -- # local d=1 00:14:46.671 07:42:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.671 07:42:12 -- scripts/common.sh@354 -- # echo 1 00:14:46.671 07:42:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:46.671 07:42:12 -- scripts/common.sh@365 -- # decimal 2 00:14:46.671 07:42:12 -- scripts/common.sh@352 -- # local d=2 00:14:46.671 07:42:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.671 07:42:12 -- scripts/common.sh@354 -- # echo 2 00:14:46.671 07:42:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:46.671 07:42:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:46.671 07:42:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:46.671 07:42:12 -- scripts/common.sh@367 -- # return 0 00:14:46.671 07:42:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.671 07:42:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.671 --rc genhtml_branch_coverage=1 00:14:46.671 --rc genhtml_function_coverage=1 00:14:46.671 --rc genhtml_legend=1 00:14:46.671 --rc geninfo_all_blocks=1 00:14:46.671 --rc geninfo_unexecuted_blocks=1 00:14:46.671 00:14:46.671 ' 00:14:46.671 07:42:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.671 --rc genhtml_branch_coverage=1 00:14:46.671 --rc genhtml_function_coverage=1 00:14:46.671 --rc genhtml_legend=1 00:14:46.671 --rc geninfo_all_blocks=1 00:14:46.671 --rc geninfo_unexecuted_blocks=1 00:14:46.671 00:14:46.671 ' 00:14:46.671 07:42:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.671 --rc genhtml_branch_coverage=1 00:14:46.671 --rc genhtml_function_coverage=1 00:14:46.671 --rc genhtml_legend=1 00:14:46.671 --rc geninfo_all_blocks=1 00:14:46.671 --rc geninfo_unexecuted_blocks=1 00:14:46.671 00:14:46.671 ' 00:14:46.671 07:42:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.671 --rc genhtml_branch_coverage=1 00:14:46.671 --rc genhtml_function_coverage=1 00:14:46.671 --rc genhtml_legend=1 00:14:46.671 --rc geninfo_all_blocks=1 00:14:46.671 --rc geninfo_unexecuted_blocks=1 00:14:46.671 00:14:46.671 ' 00:14:46.671 07:42:12 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.671 07:42:12 -- nvmf/common.sh@7 -- # uname -s 00:14:46.671 07:42:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.671 07:42:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.671 07:42:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.671 07:42:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.671 07:42:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.671 07:42:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.671 07:42:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.671 07:42:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.671 07:42:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.671 07:42:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.671 07:42:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:14:46.671 
07:42:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:14:46.671 07:42:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.671 07:42:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.671 07:42:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.671 07:42:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.671 07:42:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.671 07:42:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.671 07:42:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.671 07:42:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.671 07:42:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.671 07:42:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.671 07:42:12 -- paths/export.sh@5 -- # export PATH 00:14:46.672 07:42:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.672 07:42:12 -- nvmf/common.sh@46 -- # : 0 00:14:46.672 07:42:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:46.672 07:42:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:46.672 07:42:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:46.672 07:42:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.672 07:42:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.672 07:42:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
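A few entries above (nvmf/common.sh@17-20) the harness derives a host identity once per run: NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID as the UUID embedded in it, both packaged into the NVME_HOST array for tests that attach with nvme-cli. This digest run drives I/O through bdevperf instead, so the array is not consumed here; an illustrative (not taken from this log) use would be:

    NVME_HOSTNQN=$(nvme gen-hostnqn)                 # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}                  # assumption: the trailing UUID doubles as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # typical consumer (not part of this run):
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"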
00:14:46.672 07:42:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:46.672 07:42:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:46.672 07:42:12 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:46.672 07:42:12 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:14:46.672 07:42:12 -- host/digest.sh@16 -- # runtime=2 00:14:46.672 07:42:12 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:14:46.672 07:42:12 -- host/digest.sh@132 -- # nvmftestinit 00:14:46.672 07:42:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:46.672 07:42:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.672 07:42:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:46.672 07:42:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:46.672 07:42:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:46.672 07:42:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.672 07:42:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.672 07:42:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.672 07:42:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:46.672 07:42:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:46.672 07:42:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:46.672 07:42:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:46.672 07:42:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:46.672 07:42:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:46.672 07:42:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.672 07:42:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.672 07:42:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:46.672 07:42:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:46.672 07:42:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.672 07:42:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.672 07:42:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.672 07:42:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.672 07:42:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.672 07:42:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.672 07:42:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.672 07:42:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.672 07:42:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:46.672 07:42:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:46.672 Cannot find device "nvmf_tgt_br" 00:14:46.672 07:42:12 -- nvmf/common.sh@154 -- # true 00:14:46.672 07:42:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.672 Cannot find device "nvmf_tgt_br2" 00:14:46.672 07:42:12 -- nvmf/common.sh@155 -- # true 00:14:46.672 07:42:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:46.672 07:42:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:46.672 Cannot find device "nvmf_tgt_br" 00:14:46.672 07:42:12 -- nvmf/common.sh@157 -- # true 00:14:46.672 07:42:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:46.672 Cannot find device "nvmf_tgt_br2" 00:14:46.672 07:42:12 -- nvmf/common.sh@158 -- # true 00:14:46.672 07:42:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:46.931 07:42:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:46.931 
07:42:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.931 07:42:12 -- nvmf/common.sh@161 -- # true 00:14:46.931 07:42:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.931 07:42:12 -- nvmf/common.sh@162 -- # true 00:14:46.931 07:42:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.931 07:42:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.931 07:42:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.931 07:42:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.931 07:42:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.931 07:42:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.931 07:42:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.931 07:42:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.931 07:42:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.931 07:42:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:46.931 07:42:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:46.931 07:42:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:46.931 07:42:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:46.931 07:42:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.931 07:42:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.931 07:42:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.931 07:42:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:46.931 07:42:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:46.931 07:42:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.931 07:42:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.931 07:42:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.931 07:42:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.931 07:42:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.931 07:42:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:46.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:46.931 00:14:46.931 --- 10.0.0.2 ping statistics --- 00:14:46.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.931 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:46.931 07:42:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:46.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:46.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:14:46.931 00:14:46.931 --- 10.0.0.3 ping statistics --- 00:14:46.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.931 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:46.931 07:42:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:14:46.931 00:14:46.931 --- 10.0.0.1 ping statistics --- 00:14:46.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.931 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:46.931 07:42:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.931 07:42:12 -- nvmf/common.sh@421 -- # return 0 00:14:46.931 07:42:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:46.931 07:42:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.931 07:42:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:46.931 07:42:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:46.931 07:42:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.931 07:42:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:46.931 07:42:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:47.190 07:42:12 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:47.190 07:42:12 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:14:47.190 07:42:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:47.190 07:42:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.190 07:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.190 ************************************ 00:14:47.190 START TEST nvmf_digest_clean 00:14:47.190 ************************************ 00:14:47.190 07:42:12 -- common/autotest_common.sh@1114 -- # run_digest 00:14:47.190 07:42:12 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:14:47.190 07:42:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:47.190 07:42:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.190 07:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.190 07:42:12 -- nvmf/common.sh@469 -- # nvmfpid=71487 00:14:47.190 07:42:12 -- nvmf/common.sh@470 -- # waitforlisten 71487 00:14:47.190 07:42:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:47.190 07:42:12 -- common/autotest_common.sh@829 -- # '[' -z 71487 ']' 00:14:47.190 07:42:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.190 07:42:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.190 07:42:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.190 07:42:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.190 07:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.190 [2024-12-02 07:42:12.624049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:47.190 [2024-12-02 07:42:12.624139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.190 [2024-12-02 07:42:12.757622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.190 [2024-12-02 07:42:12.807209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.190 [2024-12-02 07:42:12.807856] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.190 [2024-12-02 07:42:12.807961] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.190 [2024-12-02 07:42:12.808021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.190 [2024-12-02 07:42:12.808166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.450 07:42:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.450 07:42:12 -- common/autotest_common.sh@862 -- # return 0 00:14:47.450 07:42:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:47.450 07:42:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.450 07:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.450 07:42:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.450 07:42:12 -- host/digest.sh@120 -- # common_target_config 00:14:47.450 07:42:12 -- host/digest.sh@43 -- # rpc_cmd 00:14:47.450 07:42:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.450 07:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.450 null0 00:14:47.450 [2024-12-02 07:42:12.951347] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.450 [2024-12-02 07:42:12.975443] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:47.450 07:42:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.450 07:42:12 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:14:47.450 07:42:12 -- host/digest.sh@77 -- # local rw bs qd 00:14:47.450 07:42:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:47.450 07:42:12 -- host/digest.sh@80 -- # rw=randread 00:14:47.450 07:42:12 -- host/digest.sh@80 -- # bs=4096 00:14:47.450 07:42:12 -- host/digest.sh@80 -- # qd=128 00:14:47.450 07:42:12 -- host/digest.sh@82 -- # bperfpid=71506 00:14:47.450 07:42:12 -- host/digest.sh@83 -- # waitforlisten 71506 /var/tmp/bperf.sock 00:14:47.450 07:42:12 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:14:47.450 07:42:12 -- common/autotest_common.sh@829 -- # '[' -z 71506 ']' 00:14:47.450 07:42:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:47.450 07:42:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.450 07:42:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:14:47.450 07:42:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.450 07:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:47.450 [2024-12-02 07:42:13.034173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:47.450 [2024-12-02 07:42:13.034456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71506 ] 00:14:47.709 [2024-12-02 07:42:13.171669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.709 [2024-12-02 07:42:13.223662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.709 07:42:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.709 07:42:13 -- common/autotest_common.sh@862 -- # return 0 00:14:47.709 07:42:13 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:14:47.709 07:42:13 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:14:47.709 07:42:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:47.968 07:42:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:47.968 07:42:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:48.535 nvme0n1 00:14:48.535 07:42:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:14:48.535 07:42:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:48.535 Running I/O for 2 seconds... 
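The sequence that leads to the "Running I/O for 2 seconds" line above is wired entirely over the bdevperf RPC socket: bdevperf starts idle with --wait-for-rpc, the framework is initialized, a controller is attached with TCP data digest enabled (--ddgst), and only then is the randread workload kicked off. A sketch of that sequence, with paths and arguments copied from the trace:

    spdk=/home/vagrant/spdk_repo/spdk

    # 1. start bdevperf idle on its own RPC socket (4 KiB randread, queue depth 128, 2 s)
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # 2. finish subsystem init, then attach the target with data digest enabled
    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. run the workload that produces the latency table below
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests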
00:14:50.438 00:14:50.438 Latency(us) 00:14:50.438 [2024-12-02T07:42:16.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.438 [2024-12-02T07:42:16.062Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:14:50.438 nvme0n1 : 2.01 18406.96 71.90 0.00 0.00 6949.96 6285.50 21209.83 00:14:50.438 [2024-12-02T07:42:16.062Z] =================================================================================================================== 00:14:50.438 [2024-12-02T07:42:16.062Z] Total : 18406.96 71.90 0.00 0.00 6949.96 6285.50 21209.83 00:14:50.438 0 00:14:50.696 07:42:16 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:14:50.696 07:42:16 -- host/digest.sh@92 -- # get_accel_stats 00:14:50.696 07:42:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:50.696 07:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:50.696 07:42:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:50.696 | select(.opcode=="crc32c") 00:14:50.696 | "\(.module_name) \(.executed)"' 00:14:50.954 07:42:16 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:14:50.954 07:42:16 -- host/digest.sh@93 -- # exp_module=software 00:14:50.954 07:42:16 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:14:50.954 07:42:16 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:50.954 07:42:16 -- host/digest.sh@97 -- # killprocess 71506 00:14:50.954 07:42:16 -- common/autotest_common.sh@936 -- # '[' -z 71506 ']' 00:14:50.954 07:42:16 -- common/autotest_common.sh@940 -- # kill -0 71506 00:14:50.954 07:42:16 -- common/autotest_common.sh@941 -- # uname 00:14:50.954 07:42:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.954 07:42:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71506 00:14:50.954 killing process with pid 71506 00:14:50.954 Received shutdown signal, test time was about 2.000000 seconds 00:14:50.954 00:14:50.954 Latency(us) 00:14:50.954 [2024-12-02T07:42:16.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.954 [2024-12-02T07:42:16.578Z] =================================================================================================================== 00:14:50.954 [2024-12-02T07:42:16.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.954 07:42:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:50.954 07:42:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:50.954 07:42:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71506' 00:14:50.954 07:42:16 -- common/autotest_common.sh@955 -- # kill 71506 00:14:50.954 07:42:16 -- common/autotest_common.sh@960 -- # wait 71506 00:14:50.954 07:42:16 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:14:50.954 07:42:16 -- host/digest.sh@77 -- # local rw bs qd 00:14:50.954 07:42:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:50.954 07:42:16 -- host/digest.sh@80 -- # rw=randread 00:14:50.954 07:42:16 -- host/digest.sh@80 -- # bs=131072 00:14:50.954 07:42:16 -- host/digest.sh@80 -- # qd=16 00:14:50.954 07:42:16 -- host/digest.sh@82 -- # bperfpid=71563 00:14:50.954 07:42:16 -- host/digest.sh@83 -- # waitforlisten 71563 /var/tmp/bperf.sock 00:14:50.954 07:42:16 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:14:50.954 07:42:16 -- 
common/autotest_common.sh@829 -- # '[' -z 71563 ']' 00:14:50.954 07:42:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:50.954 07:42:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.954 07:42:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:14:50.954 07:42:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.954 07:42:16 -- common/autotest_common.sh@10 -- # set +x 00:14:50.954 [2024-12-02 07:42:16.576326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:51.213 [2024-12-02 07:42:16.576603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71563 ] 00:14:51.213 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:51.213 Zero copy mechanism will not be used. 00:14:51.213 [2024-12-02 07:42:16.712921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.213 [2024-12-02 07:42:16.762029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.213 07:42:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.213 07:42:16 -- common/autotest_common.sh@862 -- # return 0 00:14:51.213 07:42:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:14:51.213 07:42:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:14:51.213 07:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:51.779 07:42:17 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:51.779 07:42:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:51.779 nvme0n1 00:14:51.779 07:42:17 -- host/digest.sh@91 -- # bperf_py perform_tests 00:14:51.779 07:42:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:52.037 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:52.037 Zero copy mechanism will not be used. 00:14:52.037 Running I/O for 2 seconds... 
00:14:53.940 00:14:53.940 Latency(us) 00:14:53.940 [2024-12-02T07:42:19.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.940 [2024-12-02T07:42:19.564Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:14:53.940 nvme0n1 : 2.00 8790.73 1098.84 0.00 0.00 1817.60 1623.51 5928.03 00:14:53.940 [2024-12-02T07:42:19.564Z] =================================================================================================================== 00:14:53.940 [2024-12-02T07:42:19.564Z] Total : 8790.73 1098.84 0.00 0.00 1817.60 1623.51 5928.03 00:14:53.940 0 00:14:53.940 07:42:19 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:14:53.940 07:42:19 -- host/digest.sh@92 -- # get_accel_stats 00:14:53.940 07:42:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:53.940 07:42:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:53.940 | select(.opcode=="crc32c") 00:14:53.940 | "\(.module_name) \(.executed)"' 00:14:53.940 07:42:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:54.199 07:42:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:14:54.199 07:42:19 -- host/digest.sh@93 -- # exp_module=software 00:14:54.199 07:42:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:14:54.199 07:42:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:54.199 07:42:19 -- host/digest.sh@97 -- # killprocess 71563 00:14:54.199 07:42:19 -- common/autotest_common.sh@936 -- # '[' -z 71563 ']' 00:14:54.199 07:42:19 -- common/autotest_common.sh@940 -- # kill -0 71563 00:14:54.199 07:42:19 -- common/autotest_common.sh@941 -- # uname 00:14:54.199 07:42:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.199 07:42:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71563 00:14:54.199 killing process with pid 71563 00:14:54.199 Received shutdown signal, test time was about 2.000000 seconds 00:14:54.199 00:14:54.199 Latency(us) 00:14:54.199 [2024-12-02T07:42:19.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.199 [2024-12-02T07:42:19.823Z] =================================================================================================================== 00:14:54.199 [2024-12-02T07:42:19.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.199 07:42:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:54.199 07:42:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:54.199 07:42:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71563' 00:14:54.199 07:42:19 -- common/autotest_common.sh@955 -- # kill 71563 00:14:54.199 07:42:19 -- common/autotest_common.sh@960 -- # wait 71563 00:14:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
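Note: after each bperf workload the script reads the accel framework statistics back over the same socket and verifies that crc32c digest work was actually executed, and executed by the expected module (software in this job). A sketch of that check, assembled from the rpc.py and jq invocations echoed above:

# Sketch of the get_accel_stats check run after every bperf workload.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
[[ $acc_module == software ]]   # expected module for this job
(( acc_executed > 0 ))          # the digest path must have exercised crc32c at least once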
00:14:54.458 07:42:19 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:14:54.458 07:42:19 -- host/digest.sh@77 -- # local rw bs qd 00:14:54.458 07:42:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:54.458 07:42:19 -- host/digest.sh@80 -- # rw=randwrite 00:14:54.458 07:42:19 -- host/digest.sh@80 -- # bs=4096 00:14:54.458 07:42:19 -- host/digest.sh@80 -- # qd=128 00:14:54.458 07:42:19 -- host/digest.sh@82 -- # bperfpid=71610 00:14:54.458 07:42:19 -- host/digest.sh@83 -- # waitforlisten 71610 /var/tmp/bperf.sock 00:14:54.458 07:42:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:14:54.458 07:42:19 -- common/autotest_common.sh@829 -- # '[' -z 71610 ']' 00:14:54.458 07:42:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:54.458 07:42:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.458 07:42:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:54.458 07:42:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.458 07:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:54.458 [2024-12-02 07:42:20.006379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:54.458 [2024-12-02 07:42:20.007281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71610 ] 00:14:54.717 [2024-12-02 07:42:20.144373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.717 [2024-12-02 07:42:20.194801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.717 07:42:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.717 07:42:20 -- common/autotest_common.sh@862 -- # return 0 00:14:54.717 07:42:20 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:14:54.717 07:42:20 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:14:54.717 07:42:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:54.975 07:42:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:54.975 07:42:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:55.232 nvme0n1 00:14:55.232 07:42:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:14:55.232 07:42:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:55.490 Running I/O for 2 seconds... 
00:14:57.389 00:14:57.389 Latency(us) 00:14:57.389 [2024-12-02T07:42:23.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.389 [2024-12-02T07:42:23.013Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.389 nvme0n1 : 2.00 19536.05 76.31 0.00 0.00 6547.55 5183.30 17039.36 00:14:57.389 [2024-12-02T07:42:23.013Z] =================================================================================================================== 00:14:57.389 [2024-12-02T07:42:23.013Z] Total : 19536.05 76.31 0.00 0.00 6547.55 5183.30 17039.36 00:14:57.389 0 00:14:57.389 07:42:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:14:57.389 07:42:22 -- host/digest.sh@92 -- # get_accel_stats 00:14:57.389 07:42:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:14:57.389 07:42:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:14:57.389 07:42:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:14:57.389 | select(.opcode=="crc32c") 00:14:57.389 | "\(.module_name) \(.executed)"' 00:14:57.648 07:42:23 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:14:57.648 07:42:23 -- host/digest.sh@93 -- # exp_module=software 00:14:57.648 07:42:23 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:14:57.648 07:42:23 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:57.648 07:42:23 -- host/digest.sh@97 -- # killprocess 71610 00:14:57.648 07:42:23 -- common/autotest_common.sh@936 -- # '[' -z 71610 ']' 00:14:57.648 07:42:23 -- common/autotest_common.sh@940 -- # kill -0 71610 00:14:57.648 07:42:23 -- common/autotest_common.sh@941 -- # uname 00:14:57.648 07:42:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.648 07:42:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71610 00:14:57.648 killing process with pid 71610 00:14:57.648 Received shutdown signal, test time was about 2.000000 seconds 00:14:57.648 00:14:57.648 Latency(us) 00:14:57.648 [2024-12-02T07:42:23.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.648 [2024-12-02T07:42:23.272Z] =================================================================================================================== 00:14:57.648 [2024-12-02T07:42:23.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.648 07:42:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:57.648 07:42:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:57.648 07:42:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71610' 00:14:57.648 07:42:23 -- common/autotest_common.sh@955 -- # kill 71610 00:14:57.648 07:42:23 -- common/autotest_common.sh@960 -- # wait 71610 00:14:57.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:14:57.907 07:42:23 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:14:57.907 07:42:23 -- host/digest.sh@77 -- # local rw bs qd 00:14:57.907 07:42:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:14:57.907 07:42:23 -- host/digest.sh@80 -- # rw=randwrite 00:14:57.907 07:42:23 -- host/digest.sh@80 -- # bs=131072 00:14:57.907 07:42:23 -- host/digest.sh@80 -- # qd=16 00:14:57.907 07:42:23 -- host/digest.sh@82 -- # bperfpid=71668 00:14:57.907 07:42:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:14:57.907 07:42:23 -- host/digest.sh@83 -- # waitforlisten 71668 /var/tmp/bperf.sock 00:14:57.907 07:42:23 -- common/autotest_common.sh@829 -- # '[' -z 71668 ']' 00:14:57.907 07:42:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:14:57.907 07:42:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.907 07:42:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:14:57.907 07:42:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.907 07:42:23 -- common/autotest_common.sh@10 -- # set +x 00:14:57.907 [2024-12-02 07:42:23.444677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:57.907 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:57.907 Zero copy mechanism will not be used. 00:14:57.907 [2024-12-02 07:42:23.444947] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71668 ] 00:14:58.166 [2024-12-02 07:42:23.576737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.166 [2024-12-02 07:42:23.627156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.103 07:42:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.103 07:42:24 -- common/autotest_common.sh@862 -- # return 0 00:14:59.103 07:42:24 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:14:59.103 07:42:24 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:14:59.103 07:42:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:14:59.103 07:42:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:59.103 07:42:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:14:59.362 nvme0n1 00:14:59.362 07:42:24 -- host/digest.sh@91 -- # bperf_py perform_tests 00:14:59.362 07:42:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:14:59.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:14:59.623 Zero copy mechanism will not be used. 00:14:59.623 Running I/O for 2 seconds... 
00:15:01.527 00:15:01.527 Latency(us) 00:15:01.527 [2024-12-02T07:42:27.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.527 [2024-12-02T07:42:27.151Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:15:01.527 nvme0n1 : 2.00 7585.32 948.16 0.00 0.00 2104.84 1593.72 4498.15 00:15:01.527 [2024-12-02T07:42:27.151Z] =================================================================================================================== 00:15:01.527 [2024-12-02T07:42:27.151Z] Total : 7585.32 948.16 0.00 0.00 2104.84 1593.72 4498.15 00:15:01.527 0 00:15:01.527 07:42:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:01.527 07:42:27 -- host/digest.sh@92 -- # get_accel_stats 00:15:01.527 07:42:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:01.527 | select(.opcode=="crc32c") 00:15:01.527 | "\(.module_name) \(.executed)"' 00:15:01.527 07:42:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:01.527 07:42:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:01.786 07:42:27 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:01.786 07:42:27 -- host/digest.sh@93 -- # exp_module=software 00:15:01.786 07:42:27 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:01.786 07:42:27 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:01.786 07:42:27 -- host/digest.sh@97 -- # killprocess 71668 00:15:01.786 07:42:27 -- common/autotest_common.sh@936 -- # '[' -z 71668 ']' 00:15:01.786 07:42:27 -- common/autotest_common.sh@940 -- # kill -0 71668 00:15:01.786 07:42:27 -- common/autotest_common.sh@941 -- # uname 00:15:01.786 07:42:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.786 07:42:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71668 00:15:01.786 killing process with pid 71668 00:15:01.786 Received shutdown signal, test time was about 2.000000 seconds 00:15:01.786 00:15:01.786 Latency(us) 00:15:01.786 [2024-12-02T07:42:27.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.786 [2024-12-02T07:42:27.410Z] =================================================================================================================== 00:15:01.786 [2024-12-02T07:42:27.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.786 07:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:01.786 07:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:01.786 07:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71668' 00:15:01.786 07:42:27 -- common/autotest_common.sh@955 -- # kill 71668 00:15:01.786 07:42:27 -- common/autotest_common.sh@960 -- # wait 71668 00:15:02.045 07:42:27 -- host/digest.sh@126 -- # killprocess 71487 00:15:02.045 07:42:27 -- common/autotest_common.sh@936 -- # '[' -z 71487 ']' 00:15:02.045 07:42:27 -- common/autotest_common.sh@940 -- # kill -0 71487 00:15:02.045 07:42:27 -- common/autotest_common.sh@941 -- # uname 00:15:02.045 07:42:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:02.045 07:42:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71487 00:15:02.045 07:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:02.045 killing process with pid 71487 00:15:02.045 07:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:02.045 07:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71487' 00:15:02.045 
07:42:27 -- common/autotest_common.sh@955 -- # kill 71487 00:15:02.045 07:42:27 -- common/autotest_common.sh@960 -- # wait 71487 00:15:02.305 00:15:02.305 real 0m15.114s 00:15:02.305 user 0m29.231s 00:15:02.305 sys 0m4.302s 00:15:02.305 ************************************ 00:15:02.305 END TEST nvmf_digest_clean 00:15:02.305 ************************************ 00:15:02.305 07:42:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.305 07:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:02.305 07:42:27 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:15:02.305 07:42:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:02.305 07:42:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.305 07:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:02.305 ************************************ 00:15:02.305 START TEST nvmf_digest_error 00:15:02.305 ************************************ 00:15:02.305 07:42:27 -- common/autotest_common.sh@1114 -- # run_digest_error 00:15:02.305 07:42:27 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:15:02.305 07:42:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.305 07:42:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.305 07:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:02.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.305 07:42:27 -- nvmf/common.sh@469 -- # nvmfpid=71747 00:15:02.305 07:42:27 -- nvmf/common.sh@470 -- # waitforlisten 71747 00:15:02.305 07:42:27 -- common/autotest_common.sh@829 -- # '[' -z 71747 ']' 00:15:02.305 07:42:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:02.305 07:42:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.305 07:42:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.305 07:42:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.305 07:42:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.305 07:42:27 -- common/autotest_common.sh@10 -- # set +x 00:15:02.305 [2024-12-02 07:42:27.792675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:02.305 [2024-12-02 07:42:27.792766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.565 [2024-12-02 07:42:27.931218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.565 [2024-12-02 07:42:27.980606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.565 [2024-12-02 07:42:27.980763] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.565 [2024-12-02 07:42:27.980776] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.565 [2024-12-02 07:42:27.980783] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:02.565 [2024-12-02 07:42:27.980812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.565 07:42:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.565 07:42:28 -- common/autotest_common.sh@862 -- # return 0 00:15:02.565 07:42:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.565 07:42:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.565 07:42:28 -- common/autotest_common.sh@10 -- # set +x 00:15:02.565 07:42:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.565 07:42:28 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:15:02.565 07:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.565 07:42:28 -- common/autotest_common.sh@10 -- # set +x 00:15:02.565 [2024-12-02 07:42:28.053130] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:15:02.565 07:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.565 07:42:28 -- host/digest.sh@104 -- # common_target_config 00:15:02.565 07:42:28 -- host/digest.sh@43 -- # rpc_cmd 00:15:02.565 07:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.565 07:42:28 -- common/autotest_common.sh@10 -- # set +x 00:15:02.565 null0 00:15:02.565 [2024-12-02 07:42:28.120959] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.565 [2024-12-02 07:42:28.145062] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:02.565 07:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.565 07:42:28 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:15:02.565 07:42:28 -- host/digest.sh@54 -- # local rw bs qd 00:15:02.565 07:42:28 -- host/digest.sh@56 -- # rw=randread 00:15:02.565 07:42:28 -- host/digest.sh@56 -- # bs=4096 00:15:02.565 07:42:28 -- host/digest.sh@56 -- # qd=128 00:15:02.565 07:42:28 -- host/digest.sh@58 -- # bperfpid=71770 00:15:02.565 07:42:28 -- host/digest.sh@60 -- # waitforlisten 71770 /var/tmp/bperf.sock 00:15:02.565 07:42:28 -- common/autotest_common.sh@829 -- # '[' -z 71770 ']' 00:15:02.565 07:42:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:02.565 07:42:28 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:15:02.565 07:42:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.565 07:42:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:02.565 07:42:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.565 07:42:28 -- common/autotest_common.sh@10 -- # set +x 00:15:02.824 [2024-12-02 07:42:28.205328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
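Note: for the error-path test the target is started with --wait-for-rpc so that, before initialization completes, the crc32c opcode can be remapped to the accel "error" module (the "Operation crc32c will be assigned to module error" notice above); the null0 bdev and the TCP listener on 10.0.0.2:4420 are then created on top of it. A rough sketch of that ordering; only accel_assign_opc is echoed verbatim here, the remaining target configuration is implied by the notices and is left as comments:

# Target-side ordering implied by the log above (nvmf_tgt started with --wait-for-rpc).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target's default RPC socket
"$RPC" accel_assign_opc -o crc32c -m error        # must happen before framework init completes
"$RPC" framework_start_init                       # implied: init resumes, TCP transport comes up
# ...followed by creation of the null0 bdev, the nqn.2016-06.io.spdk:cnode1 subsystem and
# the listener on 10.0.0.2:4420 (those RPCs are not echoed in this part of the log).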
00:15:02.824 [2024-12-02 07:42:28.205590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71770 ] 00:15:02.824 [2024-12-02 07:42:28.341037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.824 [2024-12-02 07:42:28.395363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.762 07:42:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.762 07:42:29 -- common/autotest_common.sh@862 -- # return 0 00:15:03.762 07:42:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:03.762 07:42:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:03.762 07:42:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:03.762 07:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.762 07:42:29 -- common/autotest_common.sh@10 -- # set +x 00:15:03.762 07:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.762 07:42:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:03.762 07:42:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:04.021 nvme0n1 00:15:04.021 07:42:29 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:15:04.021 07:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.021 07:42:29 -- common/autotest_common.sh@10 -- # set +x 00:15:04.021 07:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.021 07:42:29 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:04.021 07:42:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:04.280 Running I/O for 2 seconds... 
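Note: the host side then prepares bdevperf for deliberate digest corruption: NVMe error statistics are enabled and the bdev retry count is set to -1, error injection is kept disabled while the --ddgst controller is attached, and only then are the next 256 crc32c operations marked for corruption before the timed run. A sketch of that flow, taken from the commands echoed above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$RPC" accel_error_inject_error -o crc32c -t disable           # target side: no corruption while connecting
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256    # corrupt the next 256 crc32c operations
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

Each corrupted digest then appears below as a "data digest error on tqpair" from nvme_tcp.c, with the affected READ completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22); with the retry count set to -1 those completions are retried rather than failing the run outright, which is why the workload keeps running for the full two seconds.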
00:15:04.280 [2024-12-02 07:42:29.721758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.721819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.721834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.280 [2024-12-02 07:42:29.735749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.735784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.735812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.280 [2024-12-02 07:42:29.749494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.749528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.749556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.280 [2024-12-02 07:42:29.762927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.763136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.763168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.280 [2024-12-02 07:42:29.776714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.776750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.776778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.280 [2024-12-02 07:42:29.790310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.790510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.790543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.280 [2024-12-02 07:42:29.804346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.280 [2024-12-02 07:42:29.804570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.280 [2024-12-02 07:42:29.804702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.281 [2024-12-02 07:42:29.818537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.281 [2024-12-02 07:42:29.818744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.281 [2024-12-02 07:42:29.818870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.281 [2024-12-02 07:42:29.832854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.281 [2024-12-02 07:42:29.833055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.281 [2024-12-02 07:42:29.833199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.281 [2024-12-02 07:42:29.847120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.281 [2024-12-02 07:42:29.847347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.281 [2024-12-02 07:42:29.847542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.281 [2024-12-02 07:42:29.861523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.281 [2024-12-02 07:42:29.861725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.281 [2024-12-02 07:42:29.861864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.281 [2024-12-02 07:42:29.876059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.281 [2024-12-02 07:42:29.876266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.281 [2024-12-02 07:42:29.876420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.281 [2024-12-02 07:42:29.890831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.281 [2024-12-02 07:42:29.891021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.281 [2024-12-02 07:42:29.891160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.906746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.906989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.907139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.921187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.921407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.921625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.935350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.935572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.935754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.949707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.949903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.949920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.963665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.963716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.963744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.977251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.977287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.977343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:29.990987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:29.991170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:29.991203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:30.006066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:30.006244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:30.006263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:30.022477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:30.022533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:30.022563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:30.039633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:30.039672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:30.039702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:30.054292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:30.054346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:30.054360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:30.068175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:30.068208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.541 [2024-12-02 07:42:30.068237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.541 [2024-12-02 07:42:30.082027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.541 [2024-12-02 07:42:30.082077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.542 [2024-12-02 07:42:30.082105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.542 [2024-12-02 07:42:30.096041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.542 [2024-12-02 07:42:30.096075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.542 [2024-12-02 07:42:30.096102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.542 [2024-12-02 07:42:30.109822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.542 [2024-12-02 07:42:30.109855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:04.542 [2024-12-02 07:42:30.109883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.542 [2024-12-02 07:42:30.123598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.542 [2024-12-02 07:42:30.123632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.542 [2024-12-02 07:42:30.123659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.542 [2024-12-02 07:42:30.137272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.542 [2024-12-02 07:42:30.137330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.542 [2024-12-02 07:42:30.137360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.542 [2024-12-02 07:42:30.151176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.542 [2024-12-02 07:42:30.151374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.542 [2024-12-02 07:42:30.151407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.801 [2024-12-02 07:42:30.166090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.801 [2024-12-02 07:42:30.166163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.801 [2024-12-02 07:42:30.166194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.801 [2024-12-02 07:42:30.180204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.801 [2024-12-02 07:42:30.180238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.801 [2024-12-02 07:42:30.180266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.801 [2024-12-02 07:42:30.194003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.801 [2024-12-02 07:42:30.194037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.801 [2024-12-02 07:42:30.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.801 [2024-12-02 07:42:30.209559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.801 [2024-12-02 07:42:30.209617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:22200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.801 [2024-12-02 07:42:30.209630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.227905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.227952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.227963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.241555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.241600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.241611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.255357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.255403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.255414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.269098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.269144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.269155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.283619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.283665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.283678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.297611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.297657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.297668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.311911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.311956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.311967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.328015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.328061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.328078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.343287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.343358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.343370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.358029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.358075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.358087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.372927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.372972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.372984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.387952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.387998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.388009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.404139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:04.802 [2024-12-02 07:42:30.404187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.404198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:04.802 [2024-12-02 07:42:30.420939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 
00:15:04.802 [2024-12-02 07:42:30.420996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.802 [2024-12-02 07:42:30.421009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.061 [2024-12-02 07:42:30.437257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.061 [2024-12-02 07:42:30.437303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.061 [2024-12-02 07:42:30.437324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.061 [2024-12-02 07:42:30.452097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.061 [2024-12-02 07:42:30.452143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.061 [2024-12-02 07:42:30.452155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.061 [2024-12-02 07:42:30.466811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.061 [2024-12-02 07:42:30.466856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.466867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.480873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.480917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.480929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.494928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.494972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.494983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.508610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.508655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.508666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.522247] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.522293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.522304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.535865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.535909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.549552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.549598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.549609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.563077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.563121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.563132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.576644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.576688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.576698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.590300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.590353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.590364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.603999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.604043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.604054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.617755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.617798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.617809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.637384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.637429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.637441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.651109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.651153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.651164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.664729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.664773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.664784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.062 [2024-12-02 07:42:30.678521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.062 [2024-12-02 07:42:30.678581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.062 [2024-12-02 07:42:30.678592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.322 [2024-12-02 07:42:30.693469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.322 [2024-12-02 07:42:30.693513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.322 [2024-12-02 07:42:30.693524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.322 [2024-12-02 07:42:30.707377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.322 [2024-12-02 07:42:30.707421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.322 [2024-12-02 07:42:30.707432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.322 [2024-12-02 07:42:30.721218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.322 [2024-12-02 07:42:30.721264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.322 [2024-12-02 07:42:30.721275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.322 [2024-12-02 07:42:30.735048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.322 [2024-12-02 07:42:30.735093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.322 [2024-12-02 07:42:30.735104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.322 [2024-12-02 07:42:30.749346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.322 [2024-12-02 07:42:30.749391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.322 [2024-12-02 07:42:30.749402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.322 [2024-12-02 07:42:30.762958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.763002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.763013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.776638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.776683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.776694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.790430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.790476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.790502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.804144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.804172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 
07:42:30.804198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.817780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.817824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.817835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.831523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.831568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.831598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.845184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.845228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.845239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.858900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.858944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.858954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.872693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.872738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.872751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.886393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.886439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.886450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.900181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.900226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1069 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.900236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.913876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.913920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.913931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.927529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.927574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.927585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.323 [2024-12-02 07:42:30.941573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.323 [2024-12-02 07:42:30.941635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.323 [2024-12-02 07:42:30.941647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:30.956456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:30.956502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:30.956512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:30.970112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:30.970181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:30.970209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:30.984036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:30.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:30.984092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:30.997837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:30.997882] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:30.997892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.011563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.011607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.011618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.025123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.025169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.025180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.039040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.039085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.039096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.052989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.053035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.053046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.066657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.066702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.066712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.080328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.080372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.080383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.093961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.094006] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.094017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.107741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.107785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.107795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.121437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.121481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.121491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.135130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.135175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.135186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.149019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.149077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.149088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.162771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.162816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.176426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.176469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.176480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.190079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.190124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.583 [2024-12-02 07:42:31.204492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.583 [2024-12-02 07:42:31.204552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.583 [2024-12-02 07:42:31.204564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.219101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.219149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.232944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.232989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.233002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.246651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.246694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.246704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.260418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.260462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.260473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.273952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.273997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.274007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.287676] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.287721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.287749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.301381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.301424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.301435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.315085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.315129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.315141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.328759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.328803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.328814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.342409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.342454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.342465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.356480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.356522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.356533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.370306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.370363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.370376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:15:05.843 [2024-12-02 07:42:31.384166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.384211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.384222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.398097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.398148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.398193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.412216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.412262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.412272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.428142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.428172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.428183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.443976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.444031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:05.843 [2024-12-02 07:42:31.458636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:05.843 [2024-12-02 07:42:31.458695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.843 [2024-12-02 07:42:31.458706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.474489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.474534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.474546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.490396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.490442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.490454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.505385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.505431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.505442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.519878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.519922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.519933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.540851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.540897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.540908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.555360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.555405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.555416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.569699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.569744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.569755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.583976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.584022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.584033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.598458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.598519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.598530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.612895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.612940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.612952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.627544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.627587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.627598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.641354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.641398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.641409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.655186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.655230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.655241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.669016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.669063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.669073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:06.103 [2024-12-02 07:42:31.682858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40) 00:15:06.103 [2024-12-02 07:42:31.682904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:06.103 [2024-12-02 07:42:31.682915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:06.103 [2024-12-02 07:42:31.696573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1099d40)
00:15:06.103 [2024-12-02 07:42:31.696619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:06.103 [2024-12-02 07:42:31.696630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:06.103
00:15:06.103 Latency(us)
00:15:06.103 [2024-12-02T07:42:31.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:06.103 [2024-12-02T07:42:31.727Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:15:06.103 nvme0n1 : 2.00 17813.07 69.58 0.00 0.00 7181.23 6464.23 26452.71
00:15:06.103 [2024-12-02T07:42:31.727Z] ===================================================================================================================
00:15:06.103 [2024-12-02T07:42:31.727Z] Total : 17813.07 69.58 0.00 0.00 7181.23 6464.23 26452.71
00:15:06.103 0
00:15:06.362 07:42:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:15:06.362 07:42:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:15:06.362 07:42:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:15:06.363 07:42:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:15:06.363 | .driver_specific
00:15:06.363 | .nvme_error
00:15:06.363 | .status_code
00:15:06.363 | .command_transient_transport_error'
00:15:06.622 07:42:31 -- host/digest.sh@71 -- # (( 139 > 0 ))
00:15:06.622 07:42:31 -- host/digest.sh@73 -- # killprocess 71770
00:15:06.622 07:42:31 -- common/autotest_common.sh@936 -- # '[' -z 71770 ']'
00:15:06.622 07:42:31 -- common/autotest_common.sh@940 -- # kill -0 71770
00:15:06.622 07:42:32 -- common/autotest_common.sh@941 -- # uname
00:15:06.622 07:42:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:06.622 07:42:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71770
00:15:06.622 07:42:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:06.622 07:42:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:06.622 killing process with pid 71770
00:15:06.622 07:42:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71770'
00:15:06.622 07:42:32 -- common/autotest_common.sh@955 -- # kill 71770
00:15:06.622 Received shutdown signal, test time was about 2.000000 seconds
00:15:06.622
00:15:06.622 Latency(us)
00:15:06.622 [2024-12-02T07:42:32.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:06.622 [2024-12-02T07:42:32.246Z] ===================================================================================================================
00:15:06.622 [2024-12-02T07:42:32.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:06.622 07:42:32 -- common/autotest_common.sh@960 -- # wait 71770
00:15:06.622 07:42:32 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:15:06.622 07:42:32 -- host/digest.sh@54 -- # local rw bs qd
00:15:06.622 07:42:32 -- host/digest.sh@56 -- # rw=randread
00:15:06.622 07:42:32 -- host/digest.sh@56 -- # bs=131072
00:15:06.622 07:42:32 -- host/digest.sh@56 -- # qd=16
00:15:06.622 07:42:32 -- host/digest.sh@58 -- # bperfpid=71826
00:15:06.622 07:42:32 -- host/digest.sh@60 -- # waitforlisten 71826 /var/tmp/bperf.sock
00:15:06.622 07:42:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:15:06.622 07:42:32 -- common/autotest_common.sh@829 -- # '[' -z 71826 ']'
00:15:06.622 07:42:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:15:06.622 07:42:32 -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:06.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:15:06.622 07:42:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:15:06.622 07:42:32 -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:06.622 07:42:32 -- common/autotest_common.sh@10 -- # set +x
00:15:06.881 I/O size of 131072 is greater than zero copy threshold (65536).
00:15:06.881 Zero copy mechanism will not be used.
00:15:06.881 [2024-12-02 07:42:32.246919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:06.881 [2024-12-02 07:42:32.247017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71826 ]
00:15:06.881 [2024-12-02 07:42:32.381275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:06.881 [2024-12-02 07:42:32.431857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:07.817 07:42:33 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:07.817 07:42:33 -- common/autotest_common.sh@862 -- # return 0
00:15:07.817 07:42:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:15:07.817 07:42:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:15:07.817 07:42:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:15:07.817 07:42:33 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.817 07:42:33 -- common/autotest_common.sh@10 -- # set +x
00:15:07.817 07:42:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.817 07:42:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:15:07.817 07:42:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:15:08.075 nvme0n1
00:15:08.335 07:42:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:15:08.335 07:42:33 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.335 07:42:33 -- common/autotest_common.sh@10 -- # set +x
00:15:08.335 07:42:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.335 07:42:33 -- host/digest.sh@69 -- # bperf_py perform_tests
00:15:08.335 07:42:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:15:08.335 I/O size of 131072 is greater than zero copy threshold (65536).
00:15:08.335 Zero copy mechanism will not be used.
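The trace above is the digest error test in miniature: a bdevperf instance is started on /var/tmp/bperf.sock, the NVMe bdev layer is asked to keep per-status-code error counters (--nvme-error-stat), the controller is attached with data digest enabled (--ddgst), crc32c corruption is injected through the accel error-injection RPC, the workload is run, and the assertion is on the command_transient_transport_error counter read back from bdev_get_iostat (139 in the run that just finished, hence the (( 139 > 0 )) check). What follows is a minimal standalone sketch of that sequence, assembled only from commands visible in this log and assumed to be run from the SPDK repository root; backgrounding bdevperf with & stands in for the script's waitforlisten helper, and the 10.0.0.2:4420 target, subsystem NQN, --bdev-retry-count -1 and -i 32 arguments are copied verbatim from this run and would differ elsewhere.

    # Start the I/O generator the digest test drives over its private RPC socket.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Enable per-status-code NVMe error counters; --bdev-retry-count -1 as in the trace above.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption via the accel framework (rpc_cmd in the trace talks to the
    # default application RPC socket, not bperf.sock; arguments copied verbatim from above).
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload, then read back how many completions ended as (00/22).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each digest failure logged by nvme_tcp_accel_seq_recv_compute_crc32_done in the output that follows therefore appears twice: once as the *ERROR* data digest line from the TCP transport, and once as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion printed right after it, which is what increments the counter the test finally asserts on.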
00:15:08.335 Running I/O for 2 seconds... 00:15:08.335 [2024-12-02 07:42:33.811626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.811700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.811714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.815713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.815760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.815772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.819699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.819749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.819761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.823755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.823802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.823815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.827618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.827666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.827677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.831414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.831463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.831475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.835310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.835371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.835384] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.839390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.839437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.839450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.843225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.843272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.843283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.847083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.847131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.847143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.851258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.851306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.851330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.855237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.855286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.855298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.859112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.859160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 07:42:33.859172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.863074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.335 [2024-12-02 07:42:33.863105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.335 [2024-12-02 
07:42:33.863116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.335 [2024-12-02 07:42:33.867150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.867197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.867209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.871077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.871124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.871136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.875141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.875189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.879394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.879441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.879453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.883216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.883263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.883274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.887054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.887101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.887114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.891204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.891269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.891281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.895185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.895232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.895243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.899129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.899176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.899188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.903054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.903101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.903113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.907139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.907186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.907199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.911075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.911121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.911133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.915023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.915070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.915082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.919120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.919169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.923030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.923077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.923088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.926888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.926935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.926947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.930819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.930866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.930878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.934992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.935053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.935065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.939031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.939077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.939088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.942956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.943001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.943013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.946911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 
07:42:33.946956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.946968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.950731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.950776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.950788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.336 [2024-12-02 07:42:33.955057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.336 [2024-12-02 07:42:33.955105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.336 [2024-12-02 07:42:33.955118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.596 [2024-12-02 07:42:33.959458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.596 [2024-12-02 07:42:33.959505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.596 [2024-12-02 07:42:33.959517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.596 [2024-12-02 07:42:33.963586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.596 [2024-12-02 07:42:33.963632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.596 [2024-12-02 07:42:33.963644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.596 [2024-12-02 07:42:33.967375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.596 [2024-12-02 07:42:33.967420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.596 [2024-12-02 07:42:33.967431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.596 [2024-12-02 07:42:33.971165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.596 [2024-12-02 07:42:33.971212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.596 [2024-12-02 07:42:33.971223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.596 [2024-12-02 07:42:33.975080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1797940) 00:15:08.596 [2024-12-02 07:42:33.975125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.975137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:33.978884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:33.978931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.978942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:33.982779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:33.982826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.982837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:33.986510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:33.986574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.986585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:33.990370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:33.990419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.990432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:33.994141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:33.994227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.994239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:33.997967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:33.998013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:33.998025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.001692] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.001738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.001750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.005492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.005539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.005550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.009214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.009261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.009272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.012912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.012958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.012970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.016761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.016807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.016818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.020507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.020553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.020565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.024320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.024366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.024377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.028107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.028153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.028165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.031950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.031996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.032008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.035793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.035839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.039566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.039612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.039623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.043332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.043388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.043401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.047079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.047127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.047139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.051007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.051053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.051065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.054823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.054870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.054881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.058633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.058695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.058706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.062466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.062515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.062542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.066317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.066376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.066389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.070262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.070321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.070336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.597 [2024-12-02 07:42:34.074252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.597 [2024-12-02 07:42:34.074302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.597 [2024-12-02 07:42:34.074330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.078031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.078077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.078088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.081861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.081907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.081919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.085572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.085617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.085628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.089318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.089363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.089375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.093172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.093218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.093229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.096910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.096956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.096968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.100758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.100803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.100815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.104605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.104650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:08.598 [2024-12-02 07:42:34.104662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.108418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.108464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.108476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.112234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.112281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.112292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.116062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.116109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.116121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.119987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.120048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.120060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.123834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.123881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.123892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.127589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.127634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.127646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.131457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.131502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.131513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.135176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.135222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.135233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.139018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.139064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.139076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.142835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.142880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.142892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.146729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.146774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.146785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.150445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.150492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.150505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.154124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.154194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.154221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.157902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.157947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.157959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.161712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.161757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.161769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.165492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.165537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.165549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.169190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.169236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.169247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.173269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.173327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.598 [2024-12-02 07:42:34.173341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.598 [2024-12-02 07:42:34.177039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.598 [2024-12-02 07:42:34.177085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.177097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.180833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.180879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.180891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.184626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
00:15:08.599 [2024-12-02 07:42:34.184672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.184683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.188471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.188517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.188528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.192386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.192431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.192443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.196214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.196260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.196272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.200039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.200084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.200096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.203893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.203939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.203967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.207768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.207813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.207825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.211645] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.211691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.211702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.599 [2024-12-02 07:42:34.215855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.599 [2024-12-02 07:42:34.215902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.599 [2024-12-02 07:42:34.215914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.220113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.220160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.220171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.224227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.224274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.224287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.228168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.228213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.228225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.231973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.232019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.232031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.235818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.235863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.235874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.239637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.239683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.239694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.243456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.243501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.243513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.247276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.247334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.247346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.251066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.251112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.251124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.254949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.254995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.255006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.258977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.259025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.259037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.262953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.262999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.263011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.266956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.267005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.267017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.270923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.270970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.270982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.274948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.274995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.275006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.278845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.278892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.278904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.282824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.282871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.282882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.286763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.286810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.286822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.290599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.290644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.290656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.294420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.294468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.860 [2024-12-02 07:42:34.294496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.860 [2024-12-02 07:42:34.298245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.860 [2024-12-02 07:42:34.298292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.298303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.301970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.302016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.302028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.305797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.305843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.305855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.309600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.309646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.309658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.313337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.313382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.313393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.317113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.317159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
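The repeated "data digest error" notices in this stretch of the log are the NVMe/TCP data digest (DDGST) check failing: DDGST is a CRC-32C computed over each data PDU's payload, and when the recomputed value does not match the digest carried in the PDU, the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the completion status printed after each notice. A minimal sketch of that check, assuming a standalone CRC-32C implementation; the helper names below are hypothetical and are not SPDK APIs:

```python
# Minimal illustrative sketch (not SPDK source): the NVMe/TCP data digest (DDGST)
# is a CRC-32C over the data PDU payload. A mismatch is what the "data digest
# error" notices in the log report. Helper names here are hypothetical.

def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), reflected polynomial 0x82F63B78, computed bitwise."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_payload: bytes, received_ddgst: int) -> bool:
    """Recompute the digest over the received payload and compare to the PDU's DDGST."""
    return crc32c(pdu_payload) == received_ddgst

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

payload = b"example data PDU payload"
ddgst = crc32c(payload)
print(data_digest_ok(payload, ddgst))      # True: digest matches
print(data_digest_ok(payload, ddgst ^ 1))  # False: corrupted digest -> digest error
```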
00:15:08.861 [2024-12-02 07:42:34.317170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.320894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.320940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.320952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.324702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.324748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.324759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.328464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.328509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.328521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.332279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.332337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.332349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.336029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.336075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.336087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.339902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.339949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.339961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.343662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.343708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.343719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.347512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.347557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.347568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.351251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.351296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.351320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.355107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.355153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.355164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.359286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.359344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.359357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.363219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.363266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.363278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.367027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.367073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.367084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.370914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.370960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.370972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.374898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.374944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.374955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.378844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.378889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.378901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.382732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.382777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.382788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.386491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.386568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.386580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.390216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.390266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.390280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.394047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.394092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.394103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.397933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
00:15:08.861 [2024-12-02 07:42:34.397979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.397991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.401755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.401801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.401813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.405645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.405690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.405701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.409412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.409457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.409468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.413189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.413235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.413247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.417016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.417062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.417074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.420847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.420894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.420906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.424720] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.424767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.424779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.428509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.428556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.428567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.432349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.432394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.432405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.436120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.436165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.436177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.439982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.440028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.440040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.443848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.443894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.443905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.447645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.447690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.447702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.451523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.451570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.451583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.455438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.455482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.455494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.459129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.459174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.459186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.463037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.463082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.463094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.466907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.466953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.466964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.470725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.470770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.470781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.474721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.474767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.474778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:08.861 [2024-12-02 07:42:34.479280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:08.861 [2024-12-02 07:42:34.479352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.861 [2024-12-02 07:42:34.479364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.484157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.484210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.484224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.488825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.488875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.488889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.493418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.493454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.493468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.498349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.498385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.498398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.502819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.502867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.502879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.507181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.507228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.507240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.511596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.126 [2024-12-02 07:42:34.511643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.126 [2024-12-02 07:42:34.511672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.126 [2024-12-02 07:42:34.515853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.515900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.515912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.520091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.520139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.520151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.524366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.524414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.524426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.528321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.528366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.528377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.532123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.532169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.532181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.535975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.536021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:09.127 [2024-12-02 07:42:34.536032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.539832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.539878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.539889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.543710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.543756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.543768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.547489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.547534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.547546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.551272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.551328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.551341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.555127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.555173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.555185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.559012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.559057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.559069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.562818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.562863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.562874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.566685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.566730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.566741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.570570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.570615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.570626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.574315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.574372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.574385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.127 [2024-12-02 07:42:34.578064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.127 [2024-12-02 07:42:34.578109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.127 [2024-12-02 07:42:34.578121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.581926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.581972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.581983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.585715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.585760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.585771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.589462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.589507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.589519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.593299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.593354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.593382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.597048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.597093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.597105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.600814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.600860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.604676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.604722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.604733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.609102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.609149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.609161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.613217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.613264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.613276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.617074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
00:15:09.128 [2024-12-02 07:42:34.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.617132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.620894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.620939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.620951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.624734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.624780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.624792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.628494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.628539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.628550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.632325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.632369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.632381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.636164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.636210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.636222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.640082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.640127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.640139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.643923] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.643969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.643980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.647839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.647885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.647897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.651655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.128 [2024-12-02 07:42:34.651700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.128 [2024-12-02 07:42:34.651711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.128 [2024-12-02 07:42:34.655540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.655585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.655597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.659336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.659391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.659402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.663225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.663272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.663284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.667173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.667220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.667231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.671206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.671253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.671265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.675089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.675134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.675146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.679023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.679070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.679082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.682920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.682966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.682977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.686834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.686878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.690726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.690772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.690783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.694636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.694681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.694692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.698376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.698423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.698434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.702232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.702279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.702291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.706028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.706074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.706085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.709980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.710026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.710038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.713821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.713868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.713880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.129 [2024-12-02 07:42:34.717668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.129 [2024-12-02 07:42:34.717699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.129 [2024-12-02 07:42:34.717710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.721443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.721489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.721501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.725146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.725192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.725204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.729001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.729047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.729058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.732778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.732824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.732835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.736558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.736603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.736614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.740305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.740349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.740361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.130 [2024-12-02 07:42:34.744960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.130 [2024-12-02 07:42:34.744996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.130 [2024-12-02 07:42:34.745010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.750069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.750119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:09.410 [2024-12-02 07:42:34.750175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.755140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.755208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.755222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.759277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.759331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.759343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.763152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.763198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.763210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.766991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.767037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.767048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.770959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.771004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.771016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.774781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.774826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.774837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.778666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.778711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.778723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.782442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.782489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.782531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.786200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.786233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.410 [2024-12-02 07:42:34.786245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.410 [2024-12-02 07:42:34.790138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.410 [2024-12-02 07:42:34.790210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.790224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.794094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.794140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.794176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.797955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.798001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.798013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.801797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.801842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.801854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.805616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.805664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.805691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.809508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.809554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.809565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.813863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.813911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.813923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.817849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.817919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.817931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.822143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.822201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.822213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.825993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.826040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.826052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.829919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.829966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.829978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.833808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
00:15:09.411 [2024-12-02 07:42:34.833855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.833866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.837826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.837872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.837884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.841610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.841657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.841668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.845388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.845434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.845445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.849200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.849246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.849258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.853033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.853079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.853091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.856929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.856975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.856987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.860714] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.860760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.860771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.864547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.864594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.864606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.868353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.868397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.868409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.872352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.872398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.872410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.876240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.876287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.876298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.880174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.880220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.880232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.884214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.884261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.884272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.888188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.888235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.888247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.892259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.892306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.892344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.896310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.896356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.896368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.900198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.900244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.900256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.904059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.904106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.904117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.907863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.907909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.907920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.911697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.911743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.911754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.915508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.915554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.915565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.919342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.919397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.919409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.923309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.923363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.923375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.927122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.927168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.927181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.931105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.931151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.931162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.934933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.934980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.934992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.939016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.939063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.939075] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.943374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.943452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.943467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.947792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.947839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.947851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.952138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.952185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.952197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.956363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.956423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.956435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.960372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.960418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.960430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.964451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.964497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.964509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.968416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.968462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.968474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.972362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.972408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.972420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.976406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.411 [2024-12-02 07:42:34.976452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.411 [2024-12-02 07:42:34.976463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.411 [2024-12-02 07:42:34.980189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:34.980236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:34.980248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:34.984130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:34.984177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:34.984189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:34.988245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:34.988291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:34.988303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:34.992093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:34.992141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:34.992153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:34.996072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:34.996118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:34.996130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:34.999956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:35.000005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:35.000018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:35.004037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:35.004084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:35.004095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:35.007924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:35.007971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:35.007982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:35.011777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:35.011825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:35.011836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.412 [2024-12-02 07:42:35.016614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.412 [2024-12-02 07:42:35.016679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.412 [2024-12-02 07:42:35.016693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.677 [2024-12-02 07:42:35.021519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.677 [2024-12-02 07:42:35.021556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.677 [2024-12-02 07:42:35.021569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.677 [2024-12-02 07:42:35.026283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.677 [2024-12-02 07:42:35.026329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.677 [2024-12-02 07:42:35.026343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.677 [2024-12-02 07:42:35.030602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.677 [2024-12-02 07:42:35.030649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.677 [2024-12-02 07:42:35.030662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.677 [2024-12-02 07:42:35.034598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.677 [2024-12-02 07:42:35.034646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.677 [2024-12-02 07:42:35.034658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.677 [2024-12-02 07:42:35.039014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.677 [2024-12-02 07:42:35.039061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.039073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.043115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.043179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.043207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.047092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.047138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.047150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.051037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.051084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.051096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.054911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
00:15:09.678 [2024-12-02 07:42:35.054957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.054969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.059013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.059060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.059072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.062983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.063030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.063042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.066862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.066909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.066921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.070840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.070887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.070900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.074804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.074851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.074862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.078647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.078693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.078706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.082460] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.082523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.082535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.086532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.086580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.086591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.090382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.090413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.090424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.094300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.094356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.094369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.098454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.098531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.098543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.102464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.102544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.102555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.106467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.106546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.106557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.110396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.110444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.110456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.114495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.114542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.114554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.118244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.118291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.118303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.122013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.122060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.122071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.125795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.125840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.125851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.129572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.129618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.129629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.133386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.133432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.133443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.137211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.137258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.137269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.141029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.141075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.141086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.144874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.144920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.144932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.148651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.148697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.148724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.152460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.152506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.152518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.156344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.156391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.156403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.160217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.160263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.160274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.164142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.164189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.164201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.167989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.168035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.168046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.171785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.171831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.171842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.175555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.175600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.175611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.179307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.179362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.179374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.183110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.183156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.183167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.678 [2024-12-02 07:42:35.186933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.678 [2024-12-02 07:42:35.186978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:09.678 [2024-12-02 07:42:35.186990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.190770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.190815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.190826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.194678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.194724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.194735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.198549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.198593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.198605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.202339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.202370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.202381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.206038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.206084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.206095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.209789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.209834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.209845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.213571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.213617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.213628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.217390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.217435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.217446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.221175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.221221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.221232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.224908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.224953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.224964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.228638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.228683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.228694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.232381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.232425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.232436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.236252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.236316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.236340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.240999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.241049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.241078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.245161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.245208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.245220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.249112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.249159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.249171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.253036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.253082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.253094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.256838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.256884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.256895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.260612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.260656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.260667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.264338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.264383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.264394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.268111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
00:15:09.679 [2024-12-02 07:42:35.268157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.268168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.271980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.272026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.272037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.275730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.275776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.275787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.279645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.279690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.279701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.283375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.283420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.283432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.287143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.287189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.287200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.291052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.291098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.291109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.679 [2024-12-02 07:42:35.294951] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.679 [2024-12-02 07:42:35.294998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.679 [2024-12-02 07:42:35.295026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.299482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.299530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.299541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.303528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.303591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.303618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.307627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.307672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.307683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.311535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.311582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.311593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.315383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.315429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.315441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.319226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.319273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.319285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.323182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.323228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.323240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.327033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.327081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.327093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.330922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.330967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.330979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.334756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.334802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.334814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.338535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.338598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.338609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.342321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.342377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.342388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.345950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.345996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.346007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.349709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.349755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.349766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.353454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.353499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.353511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.357204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.939 [2024-12-02 07:42:35.357249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.939 [2024-12-02 07:42:35.357260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.939 [2024-12-02 07:42:35.360959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.361005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.361017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.364765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.364811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.364822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.368514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.368559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.368571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.372267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.372322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.372335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.376067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.376113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.376125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.379866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.379912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.379923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.383629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.383675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.383686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.387266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.387323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.387336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.391050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.391096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.391107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.394984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.395031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.395043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.398821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.398866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.398877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.402651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.402712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.402724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.406379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.406411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.406423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.410143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.410215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.410227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.413979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.414026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.414037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.417738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.417783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.417795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.421517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.421562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.421573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.425233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.425279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.425291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.428994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.429040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.429051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.432789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.432834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.432846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.436523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.436569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.436580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.440265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.440320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.440334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.444095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.444140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.444151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.447951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.447997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.448009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.451745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.451791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.451802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.455531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.940 [2024-12-02 07:42:35.455577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.940 [2024-12-02 07:42:35.455589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.940 [2024-12-02 07:42:35.459308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.459365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.459377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.463243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.463288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.463299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.467001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.467047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.467059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.470855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.470901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.470912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.474712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.474757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.474769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.478378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 
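Every entry in the burst above follows the same three-line pattern: nvme_tcp.c:1391 reports a data digest (CRC32C) mismatch on the receive path for tqpair 0x1797940, nvme_qpair.c then prints the affected READ, and the command is completed with TRANSIENT TRANSPORT ERROR (00/22) with DNR clear (dnr:0), so it may be retried. To gauge the size of such a burst from a saved console log, counting the digest-error entries is usually enough; a minimal sketch, assuming the console output was saved to a file named console.log (not something the test scripts themselves produce), would be:

  grep -c 'data digest error on tqpair=(0x1797940)' console.log

which prints how many log lines reported a digest error for this particular qpair.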
00:15:09.941 [2024-12-02 07:42:35.478425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.478437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.482293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.482351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.482364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.486011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.486056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.486067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.489997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.490044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.490055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.493949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.493995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.494007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.498074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.498121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.498133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.502285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.502343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.502357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.506703] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.506750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.506762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.510974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.511022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.511034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.515470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.515519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.515532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.519776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.519823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.519835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.523909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.523956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.523967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.527895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.527941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.527952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.531887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.531933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.531945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.535923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.535969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.535980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.539901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.539947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.539959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.543738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.543785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.543796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.547603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.547650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.547661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.551381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.551426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.551438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.555186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.555232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.555244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:09.941 [2024-12-02 07:42:35.559571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:09.941 [2024-12-02 07:42:35.559633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:09.941 [2024-12-02 07:42:35.559660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.201 [2024-12-02 07:42:35.563795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.201 [2024-12-02 07:42:35.563841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.563853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.567985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.568031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.568042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.571836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.571881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.571893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.575737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.575783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.575794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.579574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.579620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.579632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.583383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.583429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.583441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.587215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.587261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.587272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.590978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.591023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.591036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.594990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.595036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.595048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.598851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.598896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.598908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.602712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.602757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.602768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.606687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.606734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.606745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.610661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.610706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.610718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.614664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.614709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.614720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.618454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.618517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.622381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.622430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.622442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.626113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.626201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.626214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.629968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.630014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.630026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.633853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.633901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.633913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.637685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.637732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.637743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.641661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.641708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.641720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.645570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.645617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.645629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.649463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.649508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.649519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.653236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.653284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.653295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.202 [2024-12-02 07:42:35.656984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.202 [2024-12-02 07:42:35.657030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.202 [2024-12-02 07:42:35.657041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.660897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.660943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.660954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.664746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.664792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.664803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.668575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.668620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.668632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.672411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.672457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.672468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.676256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.676303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.676327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.680074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.680120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.680132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.683874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.683921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.683932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.687792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.687839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.691624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.691670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.691682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.695389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.695436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.695447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.699112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.699158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.699170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.703076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.703121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.703132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.706867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.706913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.706925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.710776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.710822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.710834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.714668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.714714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.714725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.718619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.718664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.718676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.722426] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.722459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.722471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.726315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.726375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.726405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.730145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.730238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.730251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.733993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.734039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.734050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.737936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.737982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.737993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.741720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.741766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.741777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.745519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.745564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.745575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.749302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.749359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.749371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.753061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.753106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.753118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.756936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.203 [2024-12-02 07:42:35.756982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.203 [2024-12-02 07:42:35.756993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.203 [2024-12-02 07:42:35.760810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.760856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.760869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.764601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.764648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.764659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.768435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.768482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.768494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.772215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.772261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.772272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.776019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.776065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.776077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.779790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.779836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.779848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.783602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.783648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.783659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.787366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.787410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.787421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.791121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.791166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.791178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.795042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.795088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.795100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.798872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.798917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.798929] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:10.204 [2024-12-02 07:42:35.802639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1797940) 00:15:10.204 [2024-12-02 07:42:35.802684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:10.204 [2024-12-02 07:42:35.802696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:10.204 00:15:10.204 Latency(us) 00:15:10.204 [2024-12-02T07:42:35.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.204 [2024-12-02T07:42:35.828Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:15:10.204 nvme0n1 : 2.00 7899.90 987.49 0.00 0.00 2022.63 1638.40 8221.79 00:15:10.204 [2024-12-02T07:42:35.828Z] =================================================================================================================== 00:15:10.204 [2024-12-02T07:42:35.828Z] Total : 7899.90 987.49 0.00 0.00 2022.63 1638.40 8221.79 00:15:10.204 0 00:15:10.462 07:42:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:10.462 07:42:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:10.462 07:42:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:10.462 07:42:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:10.462 | .driver_specific 00:15:10.462 | .nvme_error 00:15:10.462 | .status_code 00:15:10.462 | .command_transient_transport_error' 00:15:10.721 07:42:36 -- host/digest.sh@71 -- # (( 510 > 0 )) 00:15:10.721 07:42:36 -- host/digest.sh@73 -- # killprocess 71826 00:15:10.721 07:42:36 -- common/autotest_common.sh@936 -- # '[' -z 71826 ']' 00:15:10.721 07:42:36 -- common/autotest_common.sh@940 -- # kill -0 71826 00:15:10.721 07:42:36 -- common/autotest_common.sh@941 -- # uname 00:15:10.721 07:42:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.721 07:42:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71826 00:15:10.721 07:42:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:10.721 07:42:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:10.721 killing process with pid 71826 00:15:10.721 07:42:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71826' 00:15:10.721 07:42:36 -- common/autotest_common.sh@955 -- # kill 71826 00:15:10.722 Received shutdown signal, test time was about 2.000000 seconds 00:15:10.722 00:15:10.722 Latency(us) 00:15:10.722 [2024-12-02T07:42:36.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.722 [2024-12-02T07:42:36.346Z] =================================================================================================================== 00:15:10.722 [2024-12-02T07:42:36.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.722 07:42:36 -- common/autotest_common.sh@960 -- # wait 71826 00:15:10.722 07:42:36 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:15:10.722 07:42:36 -- host/digest.sh@54 -- # local rw bs qd 00:15:10.722 07:42:36 -- host/digest.sh@56 -- # rw=randwrite 00:15:10.722 07:42:36 -- host/digest.sh@56 -- # bs=4096 00:15:10.722 07:42:36 -- host/digest.sh@56 -- # qd=128 00:15:10.722 07:42:36 -- 
host/digest.sh@58 -- # bperfpid=71886 00:15:10.722 07:42:36 -- host/digest.sh@60 -- # waitforlisten 71886 /var/tmp/bperf.sock 00:15:10.722 07:42:36 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:15:10.722 07:42:36 -- common/autotest_common.sh@829 -- # '[' -z 71886 ']' 00:15:10.722 07:42:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:10.722 07:42:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:10.722 07:42:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:10.722 07:42:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.722 07:42:36 -- common/autotest_common.sh@10 -- # set +x 00:15:10.981 [2024-12-02 07:42:36.372730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.981 [2024-12-02 07:42:36.372826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71886 ] 00:15:10.981 [2024-12-02 07:42:36.501582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.981 [2024-12-02 07:42:36.553989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.918 07:42:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.918 07:42:37 -- common/autotest_common.sh@862 -- # return 0 00:15:11.918 07:42:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:11.918 07:42:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:12.177 07:42:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:12.177 07:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.177 07:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:12.177 07:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.177 07:42:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:12.177 07:42:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:12.436 nvme0n1 00:15:12.436 07:42:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:15:12.436 07:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.436 07:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:12.436 07:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.436 07:42:37 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:12.436 07:42:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:12.436 Running I/O for 2 seconds... 
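The randwrite pass that starts here uses the same harness as the randread pass above: crc32c corruption is injected in the target's accel layer, a controller is attached over TCP with data digest (--ddgst) enabled, bdevperf drives the workload through the /var/tmp/bperf.sock RPC socket, and the resulting TRANSIENT TRANSPORT ERROR completions are read back from bdev_get_iostat. Below is a minimal sketch of that sequence, assembled only from the rpc.py/bdevperf.py invocations visible in this trace; the socket path, the 10.0.0.2:4420 address, the nqn.2016-06.io.spdk:cnode1 NQN and the -i 256 injection argument are simply the values this run happens to use, and rpc_cmd is the suite's wrapper for rpc.py that here addresses the nvmf target application.

# Start bdevperf in wait-for-RPC mode (-z) on its own RPC socket, as done above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Initiator side: enable NVMe error counters and unlimited bdev-level retries.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: keep crc32c injection disabled while the controller attaches.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach the controller with data digest enabled (--ddgst).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: switch injection to 'corrupt' (arguments as in the trace), then run the workload.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

# Count how many commands completed with TRANSIENT TRANSPORT ERROR (same jq filter as above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The data digest errors printed below are the expected output of this corruption pass; the test only fails if the transient-error count read back at the end is not greater than zero.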
00:15:12.436 [2024-12-02 07:42:38.000864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ddc00 00:15:12.436 [2024-12-02 07:42:38.002151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.436 [2024-12-02 07:42:38.002248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:12.436 [2024-12-02 07:42:38.014813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fef90 00:15:12.436 [2024-12-02 07:42:38.016098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.436 [2024-12-02 07:42:38.016147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.436 [2024-12-02 07:42:38.027957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ff3c8 00:15:12.436 [2024-12-02 07:42:38.029198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.436 [2024-12-02 07:42:38.029247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:12.436 [2024-12-02 07:42:38.041038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190feb58 00:15:12.436 [2024-12-02 07:42:38.042397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.436 [2024-12-02 07:42:38.042448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:12.436 [2024-12-02 07:42:38.054181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fe720 00:15:12.436 [2024-12-02 07:42:38.055650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.436 [2024-12-02 07:42:38.055700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.068768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fe2e8 00:15:12.696 [2024-12-02 07:42:38.069987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.070034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.081971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fdeb0 00:15:12.696 [2024-12-02 07:42:38.083268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.083337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:15:12.696 [2024-12-02 07:42:38.095133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fda78 00:15:12.696 [2024-12-02 07:42:38.096425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.096499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.108286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fd640 00:15:12.696 [2024-12-02 07:42:38.109490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.109537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.121405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fd208 00:15:12.696 [2024-12-02 07:42:38.122656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.122705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.134510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fcdd0 00:15:12.696 [2024-12-02 07:42:38.135730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.135778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.147674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fc998 00:15:12.696 [2024-12-02 07:42:38.148861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.148908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.160803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fc560 00:15:12.696 [2024-12-02 07:42:38.161967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.162013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.173876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fc128 00:15:12.696 [2024-12-02 07:42:38.175085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.175133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:15:12.696 [2024-12-02 07:42:38.187078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fbcf0 00:15:12.696 [2024-12-02 07:42:38.188245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.696 [2024-12-02 07:42:38.188290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.200333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fb8b8 00:15:12.697 [2024-12-02 07:42:38.201491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.201541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.213378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fb480 00:15:12.697 [2024-12-02 07:42:38.214608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.214655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.226554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fb048 00:15:12.697 [2024-12-02 07:42:38.227766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.227814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.239709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fac10 00:15:12.697 [2024-12-02 07:42:38.240838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.240887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.252915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fa7d8 00:15:12.697 [2024-12-02 07:42:38.254026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.254073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.266338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190fa3a0 00:15:12.697 [2024-12-02 07:42:38.267540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.267589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.279652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f9f68 00:15:12.697 [2024-12-02 07:42:38.280792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.280839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.292902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f9b30 00:15:12.697 [2024-12-02 07:42:38.293986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.294033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:12.697 [2024-12-02 07:42:38.306419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f96f8 00:15:12.697 [2024-12-02 07:42:38.307534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.697 [2024-12-02 07:42:38.307581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.320429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f92c0 00:15:12.957 [2024-12-02 07:42:38.321699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.321765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.333933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f8e88 00:15:12.957 [2024-12-02 07:42:38.335044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.335092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.347135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f8a50 00:15:12.957 [2024-12-02 07:42:38.348197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.348244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.360259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f8618 00:15:12.957 [2024-12-02 07:42:38.361337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.361390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.373893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f81e0 00:15:12.957 [2024-12-02 07:42:38.375024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.375072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.389275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f7da8 00:15:12.957 [2024-12-02 07:42:38.390332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.403719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f7970 00:15:12.957 [2024-12-02 07:42:38.404752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.404783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.417543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f7538 00:15:12.957 [2024-12-02 07:42:38.418590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.418621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.431472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f7100 00:15:12.957 [2024-12-02 07:42:38.432470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.432500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.445235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f6cc8 00:15:12.957 [2024-12-02 07:42:38.446272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.446309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.459295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f6890 00:15:12.957 [2024-12-02 07:42:38.460275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.460314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.473333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f6458 00:15:12.957 [2024-12-02 07:42:38.474346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.474380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.487476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f6020 00:15:12.957 [2024-12-02 07:42:38.488433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.488464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.501320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f5be8 00:15:12.957 [2024-12-02 07:42:38.502310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.502349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.515191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f57b0 00:15:12.957 [2024-12-02 07:42:38.516146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.516175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.529058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f5378 00:15:12.957 [2024-12-02 07:42:38.530052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.530099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.542377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f4f40 00:15:12.957 [2024-12-02 07:42:38.543352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.543419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.555952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f4b08 00:15:12.957 [2024-12-02 07:42:38.556922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.556956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:12.957 [2024-12-02 07:42:38.570761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f46d0 00:15:12.957 [2024-12-02 07:42:38.571802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:12.957 [2024-12-02 07:42:38.571851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.586847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f4298 00:15:13.218 [2024-12-02 07:42:38.587864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.587911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.600797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f3e60 00:15:13.218 [2024-12-02 07:42:38.601717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.601766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.613692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f3a28 00:15:13.218 [2024-12-02 07:42:38.614735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.614782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.626810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f35f0 00:15:13.218 [2024-12-02 07:42:38.627712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.627760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.639796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f31b8 00:15:13.218 [2024-12-02 07:42:38.640693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.640742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.652799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f2d80 00:15:13.218 [2024-12-02 07:42:38.653686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.653734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.665725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f2948 00:15:13.218 [2024-12-02 07:42:38.666647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.666679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.678675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f2510 00:15:13.218 [2024-12-02 07:42:38.679550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.679597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.691648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f20d8 00:15:13.218 [2024-12-02 07:42:38.692512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.692559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.704668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f1ca0 00:15:13.218 [2024-12-02 07:42:38.705527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.705574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.717662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f1868 00:15:13.218 [2024-12-02 07:42:38.718551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.718598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.730718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f1430 00:15:13.218 [2024-12-02 07:42:38.731566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.731614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.743727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f0ff8 00:15:13.218 [2024-12-02 07:42:38.744531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 
07:42:38.744580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.756906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f0bc0 00:15:13.218 [2024-12-02 07:42:38.757720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.757770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.769790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f0788 00:15:13.218 [2024-12-02 07:42:38.770617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.770683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.782771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190f0350 00:15:13.218 [2024-12-02 07:42:38.783555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.783619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.795691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eff18 00:15:13.218 [2024-12-02 07:42:38.796497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.796562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.808701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190efae0 00:15:13.218 [2024-12-02 07:42:38.809480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.809529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.821646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ef6a8 00:15:13.218 [2024-12-02 07:42:38.822420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.218 [2024-12-02 07:42:38.822485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:13.218 [2024-12-02 07:42:38.834756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ef270 00:15:13.218 [2024-12-02 07:42:38.835553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20421 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:13.218 [2024-12-02 07:42:38.835618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.849114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eee38 00:15:13.478 [2024-12-02 07:42:38.849859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.478 [2024-12-02 07:42:38.849908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.862140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eea00 00:15:13.478 [2024-12-02 07:42:38.862919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.478 [2024-12-02 07:42:38.862969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.875784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ee5c8 00:15:13.478 [2024-12-02 07:42:38.876525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.478 [2024-12-02 07:42:38.876574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.889373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ee190 00:15:13.478 [2024-12-02 07:42:38.890099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.478 [2024-12-02 07:42:38.890148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.903180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190edd58 00:15:13.478 [2024-12-02 07:42:38.903889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.478 [2024-12-02 07:42:38.903937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.916333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ed920 00:15:13.478 [2024-12-02 07:42:38.917023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.478 [2024-12-02 07:42:38.917072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:13.478 [2024-12-02 07:42:38.929352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ed4e8 00:15:13.478 [2024-12-02 07:42:38.930034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8739 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:38.930084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:38.942331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ed0b0 00:15:13.479 [2024-12-02 07:42:38.943028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:38.943075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:38.955413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ecc78 00:15:13.479 [2024-12-02 07:42:38.956054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:38.956134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:38.968396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ec840 00:15:13.479 [2024-12-02 07:42:38.969023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:38.969101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:38.981379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ec408 00:15:13.479 [2024-12-02 07:42:38.981985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:38.982063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:38.994315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ebfd0 00:15:13.479 [2024-12-02 07:42:38.995005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:38.995053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.007361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ebb98 00:15:13.479 [2024-12-02 07:42:39.007949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.007979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.020602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eb760 00:15:13.479 [2024-12-02 07:42:39.021174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16343 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.021209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.033775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eb328 00:15:13.479 [2024-12-02 07:42:39.034385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.034466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.046835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eaef0 00:15:13.479 [2024-12-02 07:42:39.047412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.047446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.059797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190eaab8 00:15:13.479 [2024-12-02 07:42:39.060352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.060408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.072685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ea680 00:15:13.479 [2024-12-02 07:42:39.073224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.073258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.085630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190ea248 00:15:13.479 [2024-12-02 07:42:39.086187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.086237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:13.479 [2024-12-02 07:42:39.099196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e9e10 00:15:13.479 [2024-12-02 07:42:39.099738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.479 [2024-12-02 07:42:39.099775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.113318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e99d8 00:15:13.743 [2024-12-02 07:42:39.113827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:2681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.113862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.126209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e95a0 00:15:13.743 [2024-12-02 07:42:39.126766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.126815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.139278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e9168 00:15:13.743 [2024-12-02 07:42:39.139786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.139821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.152235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e8d30 00:15:13.743 [2024-12-02 07:42:39.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.152772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.165346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e88f8 00:15:13.743 [2024-12-02 07:42:39.165846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.165884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.178597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e84c0 00:15:13.743 [2024-12-02 07:42:39.179069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.179104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.191831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e8088 00:15:13.743 [2024-12-02 07:42:39.192286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.192331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.204861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e7c50 00:15:13.743 [2024-12-02 07:42:39.205313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:6377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.205357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.217887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e7818 00:15:13.743 [2024-12-02 07:42:39.218361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.218398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.231060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e73e0 00:15:13.743 [2024-12-02 07:42:39.231523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.231559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.244079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e6fa8 00:15:13.743 [2024-12-02 07:42:39.244519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.244554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.257093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e6b70 00:15:13.743 [2024-12-02 07:42:39.257523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.257558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.270275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e6738 00:15:13.743 [2024-12-02 07:42:39.270702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.270739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.283290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e6300 00:15:13.743 [2024-12-02 07:42:39.283745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.283780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.296503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e5ec8 00:15:13.743 [2024-12-02 07:42:39.296887] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.296938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.309416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e5a90 00:15:13.743 [2024-12-02 07:42:39.309793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.309828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.322336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e5658 00:15:13.743 [2024-12-02 07:42:39.322786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.322821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.335274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e5220 00:15:13.743 [2024-12-02 07:42:39.335671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.335707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.348273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e4de8 00:15:13.743 [2024-12-02 07:42:39.348636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.348673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:13.743 [2024-12-02 07:42:39.361897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e49b0 00:15:13.743 [2024-12-02 07:42:39.362256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.743 [2024-12-02 07:42:39.362293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.376441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e4578 00:15:14.003 [2024-12-02 07:42:39.376768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.376803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.389455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e4140 00:15:14.003 [2024-12-02 07:42:39.389779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.389809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.402736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e3d08 00:15:14.003 [2024-12-02 07:42:39.403051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.403082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.415911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e38d0 00:15:14.003 [2024-12-02 07:42:39.416221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.416256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.429210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e3498 00:15:14.003 [2024-12-02 07:42:39.429524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.429555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.442243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e3060 00:15:14.003 [2024-12-02 07:42:39.442552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.442589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.455427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e2c28 00:15:14.003 [2024-12-02 07:42:39.455664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.455716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.468747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e27f0 00:15:14.003 [2024-12-02 07:42:39.468986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.469043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.481714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e23b8 00:15:14.003 [2024-12-02 
07:42:39.481930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.481970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.494732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e1f80 00:15:14.003 [2024-12-02 07:42:39.494944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.494965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.507639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e1b48 00:15:14.003 [2024-12-02 07:42:39.507840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.520642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e1710 00:15:14.003 [2024-12-02 07:42:39.520853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.520873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.535249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e12d8 00:15:14.003 [2024-12-02 07:42:39.535495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.535517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.549886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e0ea0 00:15:14.003 [2024-12-02 07:42:39.550049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.550069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.564215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e0a68 00:15:14.003 [2024-12-02 07:42:39.564443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.564470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.578709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e0630 00:15:14.003 
[2024-12-02 07:42:39.578903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.578931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.594449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e01f8 00:15:14.003 [2024-12-02 07:42:39.594662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.594690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.609964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190dfdc0 00:15:14.003 [2024-12-02 07:42:39.610110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.003 [2024-12-02 07:42:39.610192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:14.003 [2024-12-02 07:42:39.624753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190df988 00:15:14.003 [2024-12-02 07:42:39.624916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.624990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.639304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190df550 00:15:14.263 [2024-12-02 07:42:39.639459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.639518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.653614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190df118 00:15:14.263 [2024-12-02 07:42:39.653717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.653753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.667408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190dece0 00:15:14.263 [2024-12-02 07:42:39.667501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.667522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.681133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190de8a8 
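Each injected CRC mismatch in this stretch of the run shows up as a pair of entries: tcp.c reports a data digest error on the qpair, and the in-flight WRITE then completes with status (00/22), i.e. status code type 0x0 with status code 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR. For a rough tally straight from a saved copy of this console output, a plain grep is enough; the log file name below is only a placeholder, not something this job writes:

    # Minimal sketch: count completions carrying the transient transport error
    # status (00/22) in a saved console log. "bperf_console.log" is hypothetical.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_console.log

    # Optionally, see which command identifiers were hit most often.
    grep -o 'qid:1 cid:[0-9]*' bperf_console.log | sort | uniq -c | sort -rn | head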
00:15:14.263 [2024-12-02 07:42:39.681236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.681257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.694212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190de038 00:15:14.263 [2024-12-02 07:42:39.694312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.694334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.712693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190de038 00:15:14.263 [2024-12-02 07:42:39.714002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.714051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.726005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190de470 00:15:14.263 [2024-12-02 07:42:39.727283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.727356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.739308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190de8a8 00:15:14.263 [2024-12-02 07:42:39.740556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.740604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.752331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190dece0 00:15:14.263 [2024-12-02 07:42:39.753589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.753637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.765393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190df118 00:15:14.263 [2024-12-02 07:42:39.766747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.766794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.778710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with 
pdu=0x2000190df550 00:15:14.263 [2024-12-02 07:42:39.779914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.779961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.791840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190df988 00:15:14.263 [2024-12-02 07:42:39.793059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.793107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.805007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190dfdc0 00:15:14.263 [2024-12-02 07:42:39.806282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.806340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.818078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e01f8 00:15:14.263 [2024-12-02 07:42:39.819328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.819400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.831410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e0630 00:15:14.263 [2024-12-02 07:42:39.832631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.832677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.844552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e0a68 00:15:14.263 [2024-12-02 07:42:39.845778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.845825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.857583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e0ea0 00:15:14.263 [2024-12-02 07:42:39.858840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.263 [2024-12-02 07:42:39.858887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:14.263 [2024-12-02 07:42:39.870921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa05dc0) with pdu=0x2000190e12d8 00:15:14.264 [2024-12-02 07:42:39.872089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.264 [2024-12-02 07:42:39.872137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:14.264 [2024-12-02 07:42:39.884814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e1710 00:15:14.522 [2024-12-02 07:42:39.886304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.522 [2024-12-02 07:42:39.886364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:14.522 [2024-12-02 07:42:39.899010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e1b48 00:15:14.522 [2024-12-02 07:42:39.900159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.522 [2024-12-02 07:42:39.900206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:14.522 [2024-12-02 07:42:39.912234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e1f80 00:15:14.522 [2024-12-02 07:42:39.913390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.522 [2024-12-02 07:42:39.913447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:14.522 [2024-12-02 07:42:39.925354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e23b8 00:15:14.523 [2024-12-02 07:42:39.926572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.523 [2024-12-02 07:42:39.926619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:14.523 [2024-12-02 07:42:39.938516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e27f0 00:15:14.523 [2024-12-02 07:42:39.939675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.523 [2024-12-02 07:42:39.939723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:14.523 [2024-12-02 07:42:39.951483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e2c28 00:15:14.523 [2024-12-02 07:42:39.952601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.523 [2024-12-02 07:42:39.952649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:14.523 [2024-12-02 07:42:39.964468] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e3060 00:15:14.523 [2024-12-02 07:42:39.965577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.523 [2024-12-02 07:42:39.965623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:14.523 [2024-12-02 07:42:39.977412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa05dc0) with pdu=0x2000190e3498 00:15:14.523 [2024-12-02 07:42:39.978579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.523 [2024-12-02 07:42:39.978627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:14.523 00:15:14.523 Latency(us) 00:15:14.523 [2024-12-02T07:42:40.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.523 [2024-12-02T07:42:40.147Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.523 nvme0n1 : 2.00 18956.17 74.05 0.00 0.00 6746.89 5213.09 19184.17 00:15:14.523 [2024-12-02T07:42:40.147Z] =================================================================================================================== 00:15:14.523 [2024-12-02T07:42:40.147Z] Total : 18956.17 74.05 0.00 0.00 6746.89 5213.09 19184.17 00:15:14.523 0 00:15:14.523 07:42:40 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:14.523 07:42:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:14.523 07:42:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:14.523 07:42:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:14.523 | .driver_specific 00:15:14.523 | .nvme_error 00:15:14.523 | .status_code 00:15:14.523 | .command_transient_transport_error' 00:15:14.782 07:42:40 -- host/digest.sh@71 -- # (( 148 > 0 )) 00:15:14.782 07:42:40 -- host/digest.sh@73 -- # killprocess 71886 00:15:14.782 07:42:40 -- common/autotest_common.sh@936 -- # '[' -z 71886 ']' 00:15:14.782 07:42:40 -- common/autotest_common.sh@940 -- # kill -0 71886 00:15:14.782 07:42:40 -- common/autotest_common.sh@941 -- # uname 00:15:14.782 07:42:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.782 07:42:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71886 00:15:14.782 07:42:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:14.782 07:42:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:14.782 killing process with pid 71886 00:15:14.782 07:42:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71886' 00:15:14.782 Received shutdown signal, test time was about 2.000000 seconds 00:15:14.782 00:15:14.782 Latency(us) 00:15:14.782 [2024-12-02T07:42:40.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.782 [2024-12-02T07:42:40.406Z] =================================================================================================================== 00:15:14.782 [2024-12-02T07:42:40.406Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.782 07:42:40 -- common/autotest_common.sh@955 -- # kill 71886 00:15:14.782 07:42:40 -- common/autotest_common.sh@960 -- # wait 71886 00:15:15.041 07:42:40 -- host/digest.sh@114 -- # run_bperf_err 
randwrite 131072 16 00:15:15.041 07:42:40 -- host/digest.sh@54 -- # local rw bs qd 00:15:15.041 07:42:40 -- host/digest.sh@56 -- # rw=randwrite 00:15:15.041 07:42:40 -- host/digest.sh@56 -- # bs=131072 00:15:15.041 07:42:40 -- host/digest.sh@56 -- # qd=16 00:15:15.041 07:42:40 -- host/digest.sh@58 -- # bperfpid=71941 00:15:15.041 07:42:40 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:15:15.041 07:42:40 -- host/digest.sh@60 -- # waitforlisten 71941 /var/tmp/bperf.sock 00:15:15.041 07:42:40 -- common/autotest_common.sh@829 -- # '[' -z 71941 ']' 00:15:15.041 07:42:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:15.041 07:42:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:15.041 07:42:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:15.041 07:42:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.041 07:42:40 -- common/autotest_common.sh@10 -- # set +x 00:15:15.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:15.041 Zero copy mechanism will not be used. 00:15:15.041 [2024-12-02 07:42:40.509031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.041 [2024-12-02 07:42:40.509131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71941 ] 00:15:15.041 [2024-12-02 07:42:40.637191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.300 [2024-12-02 07:42:40.689968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.868 07:42:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.868 07:42:41 -- common/autotest_common.sh@862 -- # return 0 00:15:15.868 07:42:41 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:15.868 07:42:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:16.126 07:42:41 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:16.126 07:42:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.126 07:42:41 -- common/autotest_common.sh@10 -- # set +x 00:15:16.126 07:42:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.126 07:42:41 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:16.126 07:42:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:16.385 nvme0n1 00:15:16.385 07:42:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:15:16.385 07:42:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.385 07:42:41 -- common/autotest_common.sh@10 -- # set +x 00:15:16.385 07:42:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.385 07:42:41 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:16.385 07:42:41 
-- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:16.645 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:16.645 Zero copy mechanism will not be used. 00:15:16.645 Running I/O for 2 seconds... 00:15:16.645 [2024-12-02 07:42:42.029657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.030020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.030061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.034609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.034959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.035000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.039122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.039481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.039520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.044019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.044355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.044413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.048564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.048904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.048943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.053034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.053387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.053425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.057536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 
07:42:42.057900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.057938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.062115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.062496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.062535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.066766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.067093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.067134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.071385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.071713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.071751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.076077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.076419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.076456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.080687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.081014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.081050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.085383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.085732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.085771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.089950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with 
pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.090307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.090356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.094588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.094928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.094962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.099058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.645 [2024-12-02 07:42:42.099399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.645 [2024-12-02 07:42:42.099437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.645 [2024-12-02 07:42:42.103742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.104080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.104119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.108282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.108651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.108689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.112814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.113149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.113182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.117454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.117805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.117844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.122020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.122389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.122424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.126717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.127045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.127079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.131241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.131605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.131644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.135769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.136109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.136148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.140341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.140679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.140716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.144901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.145240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.145286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.149521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.149871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.149917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.154095] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.154474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.154543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.158696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.159034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.159079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.163244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.163611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.163650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.167817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.168153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.168197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.172444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.172789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.172827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.177012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.177349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.177396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.181614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.181948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.181991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
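The pass/fail decision for these runs does not come from scraping the messages above: with bdev_nvme_set_options --nvme-error-stat in effect, the bdev layer keeps per-status-code NVMe error counters, and host/digest.sh only asserts that the transient-transport-error count is non-zero (148 for the 4 KiB run that finished earlier). Below is a condensed sketch of that query, reusing the rpc.py call and jq filter that appear in the trace; the socket path and bdev name are the traced ones, everything else is illustrative:

    # Sketch of the accounting step. Assumes a bdevperf instance is listening on
    # /var/tmp/bperf.sock and was configured with --nvme-error-stat.
    SPDK=/home/vagrant/spdk_repo/spdk
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    if (( errcount > 0 )); then
        echo "transient transport errors recorded: $errcount"
    else
        echo "no digest errors were recorded" >&2
        exit 1
    fi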
00:15:16.646 [2024-12-02 07:42:42.186119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.186499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.186554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.190788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.191124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.191160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.195457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.195791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.195832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.200071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.200436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.200475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.204749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.205076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.205110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.209220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.209594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.209634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.213790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.214125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.214193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.218266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.218605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.218644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.222760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.646 [2024-12-02 07:42:42.223123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.646 [2024-12-02 07:42:42.223163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.646 [2024-12-02 07:42:42.227311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.227660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.227698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.231778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.232115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.232148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.236434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.236762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.236795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.240967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.241345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.241407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.245637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.245979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.246012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.250263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.250610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.250649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.254907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.255243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.255281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.259518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.259872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.259910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.647 [2024-12-02 07:42:42.264507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.647 [2024-12-02 07:42:42.264910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.647 [2024-12-02 07:42:42.264949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.269509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.269887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.269925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.274643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.274980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.275016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.279275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.279640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.279679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.283747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.284108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.284162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.288319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.288661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.288700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.292796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.293134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.293170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.297363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.297708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.297761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.301845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.302209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.302246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.306435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.306808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.306846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.310966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.311301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 
[2024-12-02 07:42:42.311348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.315526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.315854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.315893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.319944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.320279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.320324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.905 [2024-12-02 07:42:42.324639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.905 [2024-12-02 07:42:42.324970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.905 [2024-12-02 07:42:42.325006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.329245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.329618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.329656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.333870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.334228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.334274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.338598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.338935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.338972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.343155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.343496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.343529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.347836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.348160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.348194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.352465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.352792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.357132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.357492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.361793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.362118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.362151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.366476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.366839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.371105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.371443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.371476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.375784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.376114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.376147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.380428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.380764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.380799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.384930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.385266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.385328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.389435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.389780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.389818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.393854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.394219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.394254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.398437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.398801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.398838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.402944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.403280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.403324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.407463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.407789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.407827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.411912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.412250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.412286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.416482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.416817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.416863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.420936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.421271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.421339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.425476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.425831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.425876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.429978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.430369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.430401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.434729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.435054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.435090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.439420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 
[2024-12-02 07:42:42.439751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.439785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.444040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.444381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.444413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.448720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.449049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.449082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.453345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.453687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.453735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.457894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.458250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.458284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.462695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.463033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.463071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.467326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.467677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.467715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.471800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) 
with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.472136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.472169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.476543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.476873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.476906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.481202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.481565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.481604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.485939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.486299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.486348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.490610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.490938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.490972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.495465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.495799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.495838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.501121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.501502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.501532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.506697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.906 [2024-12-02 07:42:42.507052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.906 [2024-12-02 07:42:42.507108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:16.906 [2024-12-02 07:42:42.512609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.907 [2024-12-02 07:42:42.512949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-12-02 07:42:42.512983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:16.907 [2024-12-02 07:42:42.517786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.907 [2024-12-02 07:42:42.518141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-12-02 07:42:42.518210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:16.907 [2024-12-02 07:42:42.522578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:16.907 [2024-12-02 07:42:42.522917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.907 [2024-12-02 07:42:42.522955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.527898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.528249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.528288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.532903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.533239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.533284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.537439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.537779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.537819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.541849] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.542213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.542262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.546413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.546782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.546820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.550916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.551254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.551322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.555576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.555931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.555969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.560140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.560491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.560527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.564819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.565147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.565180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.569402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.569731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.569764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
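The repeated data_crc32_calc_done errors in this stretch of the log are the NVMe/TCP data-digest check (a CRC-32C over each data PDU payload) failing on WRITE payloads, with each failure surfacing as the TRANSIENT TRANSPORT ERROR (00/22) completion printed right after it. Below is a minimal, self-contained C sketch of that style of check, assuming nothing beyond generic CRC-32C: it is not SPDK's code, and the payload size, variable names, and flipped byte are arbitrary placeholders for illustration.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Illustrative sketch only: a generic bitwise CRC-32C (Castagnoli),
 * the digest family NVMe/TCP uses for PDU data digests. Not SPDK's
 * implementation; buffer size and names are arbitrary.
 */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* reflected polynomial 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512] = {0};   /* stand-in for one data PDU payload */
    uint32_t sent_ddgst = crc32c(payload, sizeof(payload));

    payload[100] ^= 0x01;         /* one flipped bit, as a corruption example */
    uint32_t recv_ddgst = crc32c(payload, sizeof(payload));

    printf("sent=0x%08x recv=0x%08x -> %s\n",
           (unsigned)sent_ddgst, (unsigned)recv_ddgst,
           sent_ddgst == recv_ddgst ? "digest ok" : "data digest error");
    return 0;
}

In this sketch, a mismatch between the digest computed over the received payload and the digest carried with it is what gets reported as a data digest error; the surrounding log then records the corresponding command completing with a transient transport error status.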
00:15:17.166 [2024-12-02 07:42:42.574070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.574432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.574466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.578748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.579078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.579111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.583291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.583661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.583699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.588115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.588457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.588490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.592806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.593137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.593184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.597253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.597601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.597644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.601853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.602214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.602250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.606540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.606910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.611110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.611470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.611514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.615768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.616097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.616133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.620455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.620783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.620816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.625149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.625491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.625524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.629646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.629997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.630030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.634115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.634474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.634525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.638945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.639283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.639347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.166 [2024-12-02 07:42:42.643848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.166 [2024-12-02 07:42:42.644202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.166 [2024-12-02 07:42:42.644242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.648957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.649300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.649350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.654235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.654572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.654627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.659293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.659713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.659751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.664171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.664550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.664597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.668989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.669342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.669397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.673622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.673985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.674023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.678088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.678449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.678483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.682740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.683066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.683099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.687426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.687754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.687789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.692036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.692364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.692407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.696728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.697060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.697092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.701272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.701678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 
[2024-12-02 07:42:42.701734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.705780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.706120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.706158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.710365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.710734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.710771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.714987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.715326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.715368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.719632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.719955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.719994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.724087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.724444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.724490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.728619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.728955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.728988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.733205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.733546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.733579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.737652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.738007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.738054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.742244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.742588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.742627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.746738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.747066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.747101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.751283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.751653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.751692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.755742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.756076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.756120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.760195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.760542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.760581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.764685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.765022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.765057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.769246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.769615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.769653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.773692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.774027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.774066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.778154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.778560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.778598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.167 [2024-12-02 07:42:42.782738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.167 [2024-12-02 07:42:42.783076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.167 [2024-12-02 07:42:42.783117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.426 [2024-12-02 07:42:42.787866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.788270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.788340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.792943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.793332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.793384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.797480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.797818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.797854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.802145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.802534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.802574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.806845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.807171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.807207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.811494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.811811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.811848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.815892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.816227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.816260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.820509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.820826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.820865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.825005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.825339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.825380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.829544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 
07:42:42.829872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.829911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.834032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.834392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.834425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.838682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.839009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.839042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.843293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.843635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.843678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.847680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.848015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.848060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.852127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.852474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.852507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.856717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.857043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.857079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.861334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with 
pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.861677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.861709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.865823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.866148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.866205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.870569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.870911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.870950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.875274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.875641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.875679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.879896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.880227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.880260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.884565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.884892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.884926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.889052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.889399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.889432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.893534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.893872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.893907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.897918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.898267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.898310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.902689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.903029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.427 [2024-12-02 07:42:42.903065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.427 [2024-12-02 07:42:42.907192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.427 [2024-12-02 07:42:42.907545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.907590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.911729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.912067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.912113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.916192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.916603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.920728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.921062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.921095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.925308] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.925648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.925684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.929878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.930230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.930264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.934464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.934840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.934878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.939007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.939342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.939388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.943603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.943941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.943985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.948074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.948427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.948470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.952747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.953077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.953112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:15:17.428 [2024-12-02 07:42:42.957355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.957684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.957716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.961934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.962289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.962333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.966360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.966760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.966798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.970967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.971293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.971340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.975626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.975946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.975984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.980149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.980512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.980549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.984945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.985316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.985378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.990028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.990415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.990461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:42.995135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:42.995485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:42.995544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.000133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.000496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.000542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.005297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.005744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.005785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.010242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.010604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.010659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.015171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.015555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.015595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.020013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.020364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.020418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.024819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.025189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.025229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.029628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.029973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.030018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.428 [2024-12-02 07:42:43.034192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.428 [2024-12-02 07:42:43.034534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.428 [2024-12-02 07:42:43.034573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.429 [2024-12-02 07:42:43.038913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.429 [2024-12-02 07:42:43.039280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.429 [2024-12-02 07:42:43.039327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.429 [2024-12-02 07:42:43.043820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.429 [2024-12-02 07:42:43.044203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.429 [2024-12-02 07:42:43.044244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.689 [2024-12-02 07:42:43.049185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.689 [2024-12-02 07:42:43.049553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.689 [2024-12-02 07:42:43.049599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.689 [2024-12-02 07:42:43.054362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.689 [2024-12-02 07:42:43.054793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.689 [2024-12-02 07:42:43.054832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.689 [2024-12-02 07:42:43.059036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.689 [2024-12-02 07:42:43.059378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.689 [2024-12-02 07:42:43.059431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.689 [2024-12-02 07:42:43.063724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.064060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.064096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.068415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.068761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.068804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.073021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.073366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.073413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.077837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.078211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.078250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.082615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.082958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.082997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.087263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.087623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 
[2024-12-02 07:42:43.087662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.092073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.092419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.092453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.096735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.097071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.097107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.101411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.101810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.101850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.106152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.106505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.106539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.110836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.111184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.111233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.115653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.115991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.116027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.120269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.120622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.120661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.124929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.125289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.125355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.129807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.130142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.130211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.134474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.134878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.134915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.139222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.139592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.139630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.143874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.144213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.144251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.148453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.148779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.148812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.152898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.153235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.153273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.157358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.157700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.157742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.161761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.162102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.162145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.166260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.166602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.166641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.170823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.171197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.175442] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.175778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.175816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.179900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.180237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.690 [2024-12-02 07:42:43.180278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.690 [2024-12-02 07:42:43.184511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.690 [2024-12-02 07:42:43.184839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.184872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.189110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.189450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.189486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.193691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.194020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.194053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.198307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.198652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.198705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.203062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.203387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.203411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.207578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.207915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.207953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.212156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.212519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.212553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.216942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 
[2024-12-02 07:42:43.217271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.217316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.221496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.221811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.221848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.225843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.226204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.226238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.230498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.230861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.230904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.235068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.235397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.235442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.239656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.240006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.240049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.244244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.244608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.244645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.248821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with 
pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.249149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.249181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.253385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.253706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.253739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.257862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.258236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.258276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.262383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.262781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.266860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.267195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.267238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.271308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.271662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.271699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.275835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.276174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.276214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.280281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.280629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.280667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.284818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.285154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.285196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.289231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.289590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.289628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.293823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.294152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.294209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.298480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.298857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.691 [2024-12-02 07:42:43.298894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.691 [2024-12-02 07:42:43.303035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.691 [2024-12-02 07:42:43.303369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.692 [2024-12-02 07:42:43.303419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.692 [2024-12-02 07:42:43.307810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.692 [2024-12-02 07:42:43.308184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.692 [2024-12-02 07:42:43.308224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.312886] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.313222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.313256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.317847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.318229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.318271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.322424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.322853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.322891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.326984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.327326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.327370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.331643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.331976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.332014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.336266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.336606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.336651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.952 [2024-12-02 07:42:43.340842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.341169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.341204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
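The block of messages above is the host-side NVMe/TCP data digest check failing repeatedly: tcp.c reports "Data digest error" on the qpair, and each corresponding WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the expected reaction when a data PDU's digest does not match its payload (as when digest errors are deliberately exercised by a test run such as this one). The digest in question (DDGST) is a CRC32C over the PDU data. A minimal, self-contained sketch of that checksum, using a plain bitwise software CRC32C rather than SPDK's own crc32c helpers, looks like this:

    /*
     * Illustrative software CRC32C (Castagnoli), the checksum used for the
     * NVMe/TCP data digest (DDGST).  Reflected form, polynomial 0x82F63B78,
     * seed 0xFFFFFFFF, final one's complement.  SPDK itself uses its own
     * (often hardware-accelerated) crc32c routines; this is only a sketch.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c(const void *buf, size_t len)
    {
            const uint8_t *p = buf;
            uint32_t crc = 0xFFFFFFFFu;

            while (len--) {
                    crc ^= *p++;
                    for (int i = 0; i < 8; i++) {
                            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
                    }
            }
            return crc ^ 0xFFFFFFFFu;       /* final complement */
    }

    int main(void)
    {
            /* Standard CRC32C check value: crc32c("123456789") == 0xE3069283. */
            const char *check = "123456789";

            printf("crc32c(\"%s\") = 0x%08x\n", check, crc32c(check, strlen(check)));
            return 0;
    }

A receiver recomputes this value over the data it actually received and compares it with the DDGST carried in the PDU; any mismatch is reported exactly as in the log above, and the command is failed with a transient transport status (retryable, dnr:0 in the completions shown).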
00:15:17.952 [2024-12-02 07:42:43.345435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.952 [2024-12-02 07:42:43.345750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.952 [2024-12-02 07:42:43.345787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.349871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.350221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.350254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.354480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.354845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.354886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.359058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.359388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.359437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.363684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.364010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.364043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.368496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.368832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.368873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.373110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.373450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.373485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.377771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.378100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.378133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.382456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.382830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.382867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.387042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.387370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.387415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.391722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.392049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.392084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.396273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.396630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.396668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.400776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.401114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.401152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.405396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.405735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.405777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.409834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.410197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.410249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.414410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.414774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.414812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.419011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.419351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.419396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.423754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.424110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.424148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.428301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.428650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.428691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.432796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.433134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.433172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.437286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.437635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.437668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.441734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.442077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.442120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.446384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.446779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.446817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.450880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.451216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.451249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.455420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.455759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.455794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.459854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.460229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.464512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.464849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 [2024-12-02 07:42:43.464888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.953 [2024-12-02 07:42:43.468958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.953 [2024-12-02 07:42:43.469292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.953 
[2024-12-02 07:42:43.469335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.473526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.473862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.473900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.478099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.478484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.478537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.482736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.483087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.483146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.487217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.487568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.487605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.491726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.492062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.492097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.496096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.496445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.496481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.500614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.500952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.500990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.505124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.505470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.505503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.509590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.509929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.509964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.514105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.514476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.514531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.518808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.519136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.519169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.523408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.523758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.523796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.528159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.528501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.528533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.532723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.533062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.533109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.537137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.537486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.537519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.541617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.541953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.541984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.546025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.546399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.546437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.550699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.551022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.551055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.555290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.555652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.555690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.559784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.560122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.560159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.564354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.564691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.564737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:17.954 [2024-12-02 07:42:43.569017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:17.954 [2024-12-02 07:42:43.569410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:17.954 [2024-12-02 07:42:43.569462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.215 [2024-12-02 07:42:43.574151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.215 [2024-12-02 07:42:43.574549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.574588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.578903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.579303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.579356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.583656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.584003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.584047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.588113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.588467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.588506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.592672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.593009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.593053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.597163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 
[2024-12-02 07:42:43.597510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.597543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.601677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.602033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.602071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.606314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.606720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.606757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.610978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.611331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.611376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.615651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.616008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.616046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.620140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.620493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.620525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.624699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.625036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.625074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.629349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) 
with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.629687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.629733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.633811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.634147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.634213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.638364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.638747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.638783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.642928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.643262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.643323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.647483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.647839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.647877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.651989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.652329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.652377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.656683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.657022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.657061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.661533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.661884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.661922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.666754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.667087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.667129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.671975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.672356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.672420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.677317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.677731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.677769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.682197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.682554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.682594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.687067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.687424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.687480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.691955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.692289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.692368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.696781] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.216 [2024-12-02 07:42:43.697120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.216 [2024-12-02 07:42:43.697159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.216 [2024-12-02 07:42:43.701211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.701583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.701621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.705850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.706226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.706266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.710487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.710846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.710879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.715114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.715460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.715493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.719798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.720122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.720169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.724516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.724854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.724896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:15:18.217 [2024-12-02 07:42:43.729130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.729502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.729548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.733721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.734056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.734096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.738273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.738622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.738661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.742853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.743190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.743235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.747433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.747771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.747822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.751889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.752226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.752264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.756472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.756810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.756854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.760898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.761235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.761273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.765396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.765756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.765793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.769804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.770139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.770195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.774427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.774798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.774836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.778967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.779301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.779347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.783427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.783764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.783808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.787868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.788203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.788246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.792509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.792836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.792871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.797083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.797444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.797492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.801566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.801922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.801960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.806090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.806473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.806530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.810766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.811100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.811144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.815276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.815642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.815680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.819801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.217 [2024-12-02 07:42:43.820139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.217 [2024-12-02 07:42:43.820177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.217 [2024-12-02 07:42:43.824336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.218 [2024-12-02 07:42:43.824672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.218 [2024-12-02 07:42:43.824711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.218 [2024-12-02 07:42:43.828907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.218 [2024-12-02 07:42:43.829243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.218 [2024-12-02 07:42:43.829281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.218 [2024-12-02 07:42:43.833687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.218 [2024-12-02 07:42:43.834104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.218 [2024-12-02 07:42:43.834145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.838886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.839223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.839265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.843903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.844259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.844307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.848434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.848771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.848809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.852876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.853213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 
[2024-12-02 07:42:43.853254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.857445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.857804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.857842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.861889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.862255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.862291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.866448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.866810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.866847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.871071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.871418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.871466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.875728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.876069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.478 [2024-12-02 07:42:43.876108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.478 [2024-12-02 07:42:43.880409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.478 [2024-12-02 07:42:43.880753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.880791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.884997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.885355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.885408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.889483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.889840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.889879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.893933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.894302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.894351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.898623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.898957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.898990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.903144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.903514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.903550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.907705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.908042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.908084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.912195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.912545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.912586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.916670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.917009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.917050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.921139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.921486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.921520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.925594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.925935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.925978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.930019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.930373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.930406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.934697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.935033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.935066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.939245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.939619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.939657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.943790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.944126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.944175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.948226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.948587] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.948628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.952733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.953071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.953109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.957157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.957506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.957538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.961613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.961949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.961996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.966069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.966456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.966501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.970632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.970986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.971024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.975118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.975472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.975507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.979731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.980067] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.980110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.984150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.984499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.984534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.988652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.988988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.989022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.993069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.993416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.993449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:43.997542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:43.997877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.479 [2024-12-02 07:42:43.997921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.479 [2024-12-02 07:42:44.001924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.479 [2024-12-02 07:42:44.002288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.480 [2024-12-02 07:42:44.002332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.480 [2024-12-02 07:42:44.006579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.480 [2024-12-02 07:42:44.006930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.480 [2024-12-02 07:42:44.006971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:18.480 [2024-12-02 07:42:44.011043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 
00:15:18.480 [2024-12-02 07:42:44.011380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.480 [2024-12-02 07:42:44.011423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:18.480 [2024-12-02 07:42:44.015547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.480 [2024-12-02 07:42:44.015884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.480 [2024-12-02 07:42:44.015929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:18.480 [2024-12-02 07:42:44.020046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa06100) with pdu=0x2000190fef90 00:15:18.480 [2024-12-02 07:42:44.020389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:18.480 [2024-12-02 07:42:44.020423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:18.480 00:15:18.480 Latency(us) 00:15:18.480 [2024-12-02T07:42:44.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.480 [2024-12-02T07:42:44.104Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:15:18.480 nvme0n1 : 2.00 6679.93 834.99 0.00 0.00 2390.26 1951.19 7923.90 00:15:18.480 [2024-12-02T07:42:44.104Z] =================================================================================================================== 00:15:18.480 [2024-12-02T07:42:44.104Z] Total : 6679.93 834.99 0.00 0.00 2390.26 1951.19 7923.90 00:15:18.480 0 00:15:18.480 07:42:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:18.480 07:42:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:18.480 07:42:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:18.480 07:42:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:18.480 | .driver_specific 00:15:18.480 | .nvme_error 00:15:18.480 | .status_code 00:15:18.480 | .command_transient_transport_error' 00:15:18.740 07:42:44 -- host/digest.sh@71 -- # (( 431 > 0 )) 00:15:18.740 07:42:44 -- host/digest.sh@73 -- # killprocess 71941 00:15:18.740 07:42:44 -- common/autotest_common.sh@936 -- # '[' -z 71941 ']' 00:15:18.740 07:42:44 -- common/autotest_common.sh@940 -- # kill -0 71941 00:15:18.740 07:42:44 -- common/autotest_common.sh@941 -- # uname 00:15:18.740 07:42:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.740 07:42:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71941 00:15:18.740 07:42:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:18.740 killing process with pid 71941 00:15:18.740 07:42:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:18.740 07:42:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71941' 00:15:18.740 Received shutdown signal, test time was about 2.000000 seconds 00:15:18.740 00:15:18.740 Latency(us) 00:15:18.740 [2024-12-02T07:42:44.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:15:18.740 [2024-12-02T07:42:44.364Z] =================================================================================================================== 00:15:18.740 [2024-12-02T07:42:44.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:18.740 07:42:44 -- common/autotest_common.sh@955 -- # kill 71941 00:15:18.740 07:42:44 -- common/autotest_common.sh@960 -- # wait 71941 00:15:18.999 07:42:44 -- host/digest.sh@115 -- # killprocess 71747 00:15:18.999 07:42:44 -- common/autotest_common.sh@936 -- # '[' -z 71747 ']' 00:15:18.999 07:42:44 -- common/autotest_common.sh@940 -- # kill -0 71747 00:15:18.999 07:42:44 -- common/autotest_common.sh@941 -- # uname 00:15:18.999 07:42:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.999 07:42:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71747 00:15:18.999 07:42:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.999 07:42:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.999 killing process with pid 71747 00:15:18.999 07:42:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71747' 00:15:18.999 07:42:44 -- common/autotest_common.sh@955 -- # kill 71747 00:15:18.999 07:42:44 -- common/autotest_common.sh@960 -- # wait 71747 00:15:19.259 00:15:19.259 real 0m16.959s 00:15:19.259 user 0m33.235s 00:15:19.259 sys 0m4.423s 00:15:19.259 07:42:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:19.259 07:42:44 -- common/autotest_common.sh@10 -- # set +x 00:15:19.259 ************************************ 00:15:19.259 END TEST nvmf_digest_error 00:15:19.259 ************************************ 00:15:19.259 07:42:44 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:15:19.259 07:42:44 -- host/digest.sh@139 -- # nvmftestfini 00:15:19.259 07:42:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.259 07:42:44 -- nvmf/common.sh@116 -- # sync 00:15:19.259 07:42:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.259 07:42:44 -- nvmf/common.sh@119 -- # set +e 00:15:19.259 07:42:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.259 07:42:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.259 rmmod nvme_tcp 00:15:19.259 rmmod nvme_fabrics 00:15:19.259 rmmod nvme_keyring 00:15:19.259 07:42:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.259 07:42:44 -- nvmf/common.sh@123 -- # set -e 00:15:19.259 07:42:44 -- nvmf/common.sh@124 -- # return 0 00:15:19.259 07:42:44 -- nvmf/common.sh@477 -- # '[' -n 71747 ']' 00:15:19.259 07:42:44 -- nvmf/common.sh@478 -- # killprocess 71747 00:15:19.259 07:42:44 -- common/autotest_common.sh@936 -- # '[' -z 71747 ']' 00:15:19.259 07:42:44 -- common/autotest_common.sh@940 -- # kill -0 71747 00:15:19.259 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (71747) - No such process 00:15:19.259 Process with pid 71747 is not found 00:15:19.259 07:42:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71747 is not found' 00:15:19.259 07:42:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.259 07:42:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.259 07:42:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.259 07:42:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.259 07:42:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.259 07:42:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.259 07:42:44 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:19.259 07:42:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.259 07:42:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:19.519 00:15:19.519 real 0m32.897s 00:15:19.519 user 1m2.686s 00:15:19.519 sys 0m9.063s 00:15:19.519 07:42:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:19.519 07:42:44 -- common/autotest_common.sh@10 -- # set +x 00:15:19.519 ************************************ 00:15:19.519 END TEST nvmf_digest 00:15:19.519 ************************************ 00:15:19.519 07:42:44 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:15:19.519 07:42:44 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:15:19.519 07:42:44 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:15:19.519 07:42:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.519 07:42:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.519 07:42:44 -- common/autotest_common.sh@10 -- # set +x 00:15:19.519 ************************************ 00:15:19.519 START TEST nvmf_multipath 00:15:19.519 ************************************ 00:15:19.519 07:42:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:15:19.519 * Looking for test storage... 00:15:19.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:19.519 07:42:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:19.519 07:42:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:19.519 07:42:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:19.519 07:42:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:19.519 07:42:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:19.519 07:42:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:19.519 07:42:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:19.519 07:42:45 -- scripts/common.sh@335 -- # IFS=.-: 00:15:19.519 07:42:45 -- scripts/common.sh@335 -- # read -ra ver1 00:15:19.519 07:42:45 -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.519 07:42:45 -- scripts/common.sh@336 -- # read -ra ver2 00:15:19.519 07:42:45 -- scripts/common.sh@337 -- # local 'op=<' 00:15:19.519 07:42:45 -- scripts/common.sh@339 -- # ver1_l=2 00:15:19.519 07:42:45 -- scripts/common.sh@340 -- # ver2_l=1 00:15:19.519 07:42:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:19.519 07:42:45 -- scripts/common.sh@343 -- # case "$op" in 00:15:19.519 07:42:45 -- scripts/common.sh@344 -- # : 1 00:15:19.519 07:42:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:19.519 07:42:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.519 07:42:45 -- scripts/common.sh@364 -- # decimal 1 00:15:19.519 07:42:45 -- scripts/common.sh@352 -- # local d=1 00:15:19.519 07:42:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.519 07:42:45 -- scripts/common.sh@354 -- # echo 1 00:15:19.519 07:42:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:19.519 07:42:45 -- scripts/common.sh@365 -- # decimal 2 00:15:19.519 07:42:45 -- scripts/common.sh@352 -- # local d=2 00:15:19.519 07:42:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.519 07:42:45 -- scripts/common.sh@354 -- # echo 2 00:15:19.519 07:42:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:19.519 07:42:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:19.519 07:42:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:19.519 07:42:45 -- scripts/common.sh@367 -- # return 0 00:15:19.519 07:42:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.519 07:42:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:19.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.519 --rc genhtml_branch_coverage=1 00:15:19.519 --rc genhtml_function_coverage=1 00:15:19.520 --rc genhtml_legend=1 00:15:19.520 --rc geninfo_all_blocks=1 00:15:19.520 --rc geninfo_unexecuted_blocks=1 00:15:19.520 00:15:19.520 ' 00:15:19.520 07:42:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:19.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.520 --rc genhtml_branch_coverage=1 00:15:19.520 --rc genhtml_function_coverage=1 00:15:19.520 --rc genhtml_legend=1 00:15:19.520 --rc geninfo_all_blocks=1 00:15:19.520 --rc geninfo_unexecuted_blocks=1 00:15:19.520 00:15:19.520 ' 00:15:19.520 07:42:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:19.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.520 --rc genhtml_branch_coverage=1 00:15:19.520 --rc genhtml_function_coverage=1 00:15:19.520 --rc genhtml_legend=1 00:15:19.520 --rc geninfo_all_blocks=1 00:15:19.520 --rc geninfo_unexecuted_blocks=1 00:15:19.520 00:15:19.520 ' 00:15:19.520 07:42:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:19.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.520 --rc genhtml_branch_coverage=1 00:15:19.520 --rc genhtml_function_coverage=1 00:15:19.520 --rc genhtml_legend=1 00:15:19.520 --rc geninfo_all_blocks=1 00:15:19.520 --rc geninfo_unexecuted_blocks=1 00:15:19.520 00:15:19.520 ' 00:15:19.520 07:42:45 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.520 07:42:45 -- nvmf/common.sh@7 -- # uname -s 00:15:19.520 07:42:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.520 07:42:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.520 07:42:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.520 07:42:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.520 07:42:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.520 07:42:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.520 07:42:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.520 07:42:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.520 07:42:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.520 07:42:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.520 07:42:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:15:19.520 
07:42:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:15:19.520 07:42:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.520 07:42:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.520 07:42:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.520 07:42:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.520 07:42:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.520 07:42:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.520 07:42:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.520 07:42:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.520 07:42:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.520 07:42:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.520 07:42:45 -- paths/export.sh@5 -- # export PATH 00:15:19.520 07:42:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.520 07:42:45 -- nvmf/common.sh@46 -- # : 0 00:15:19.520 07:42:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:19.520 07:42:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:19.520 07:42:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:19.520 07:42:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.520 07:42:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.520 07:42:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
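The nvmftestinit/nvmf_veth_init sequence that follows builds the virtual test network: the target-side interfaces live in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on nvmf_init_if, and the host-side veth ends are joined by the nvmf_br bridge. A condensed standalone sketch of the same bring-up (root privileges, iproute2 and iptables assumed; interface names and addresses are the ones the trace below uses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on 4420
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridge-local forwarding
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> both target IPs

The two pings at the end mirror the checks in the trace; the later ip netns exec ping of 10.0.0.1 verifies the reverse direction from inside the namespace.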
00:15:19.520 07:42:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:19.520 07:42:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:19.520 07:42:45 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:19.520 07:42:45 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:19.520 07:42:45 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.520 07:42:45 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:19.520 07:42:45 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:19.520 07:42:45 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:19.520 07:42:45 -- host/multipath.sh@30 -- # nvmftestinit 00:15:19.520 07:42:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:19.520 07:42:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.520 07:42:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:19.520 07:42:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:19.520 07:42:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:19.520 07:42:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.520 07:42:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.520 07:42:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.778 07:42:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:19.778 07:42:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:19.778 07:42:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:19.778 07:42:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:19.778 07:42:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:19.778 07:42:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:19.778 07:42:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.778 07:42:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.778 07:42:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:19.778 07:42:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:19.778 07:42:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.778 07:42:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.778 07:42:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.778 07:42:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.778 07:42:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.778 07:42:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.778 07:42:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.778 07:42:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.778 07:42:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:19.778 07:42:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:19.778 Cannot find device "nvmf_tgt_br" 00:15:19.778 07:42:45 -- nvmf/common.sh@154 -- # true 00:15:19.778 07:42:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.778 Cannot find device "nvmf_tgt_br2" 00:15:19.778 07:42:45 -- nvmf/common.sh@155 -- # true 00:15:19.778 07:42:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:19.778 07:42:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:19.778 Cannot find device "nvmf_tgt_br" 00:15:19.778 07:42:45 -- nvmf/common.sh@157 -- # true 00:15:19.778 07:42:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:19.778 Cannot find device 
"nvmf_tgt_br2" 00:15:19.778 07:42:45 -- nvmf/common.sh@158 -- # true 00:15:19.778 07:42:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:19.778 07:42:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:19.778 07:42:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.778 07:42:45 -- nvmf/common.sh@161 -- # true 00:15:19.778 07:42:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.778 07:42:45 -- nvmf/common.sh@162 -- # true 00:15:19.778 07:42:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.778 07:42:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.778 07:42:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.778 07:42:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.778 07:42:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.778 07:42:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.778 07:42:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.778 07:42:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:19.778 07:42:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:19.778 07:42:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:19.778 07:42:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.037 07:42:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.037 07:42:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.037 07:42:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.037 07:42:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:20.037 07:42:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.037 07:42:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.037 07:42:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.037 07:42:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.037 07:42:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.037 07:42:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.037 07:42:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.037 07:42:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.037 07:42:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:20.037 00:15:20.037 --- 10.0.0.2 ping statistics --- 00:15:20.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.037 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:20.037 07:42:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:20.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:20.037 00:15:20.037 --- 10.0.0.3 ping statistics --- 00:15:20.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.037 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:20.037 07:42:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:20.037 00:15:20.037 --- 10.0.0.1 ping statistics --- 00:15:20.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.037 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:20.037 07:42:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.037 07:42:45 -- nvmf/common.sh@421 -- # return 0 00:15:20.037 07:42:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.037 07:42:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.037 07:42:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.037 07:42:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.037 07:42:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.037 07:42:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.037 07:42:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.037 07:42:45 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:15:20.037 07:42:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.037 07:42:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.037 07:42:45 -- common/autotest_common.sh@10 -- # set +x 00:15:20.037 07:42:45 -- nvmf/common.sh@469 -- # nvmfpid=72216 00:15:20.037 07:42:45 -- nvmf/common.sh@470 -- # waitforlisten 72216 00:15:20.037 07:42:45 -- common/autotest_common.sh@829 -- # '[' -z 72216 ']' 00:15:20.037 07:42:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.037 07:42:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:20.038 07:42:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.038 07:42:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.038 07:42:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.038 07:42:45 -- common/autotest_common.sh@10 -- # set +x 00:15:20.038 [2024-12-02 07:42:45.567267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:20.038 [2024-12-02 07:42:45.567366] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.297 [2024-12-02 07:42:45.707180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:20.297 [2024-12-02 07:42:45.776766] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.297 [2024-12-02 07:42:45.776936] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.297 [2024-12-02 07:42:45.776954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:20.297 [2024-12-02 07:42:45.776965] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.297 [2024-12-02 07:42:45.777432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.297 [2024-12-02 07:42:45.777459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.232 07:42:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.232 07:42:46 -- common/autotest_common.sh@862 -- # return 0 00:15:21.232 07:42:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:21.232 07:42:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.232 07:42:46 -- common/autotest_common.sh@10 -- # set +x 00:15:21.232 07:42:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.232 07:42:46 -- host/multipath.sh@33 -- # nvmfapp_pid=72216 00:15:21.232 07:42:46 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:21.232 [2024-12-02 07:42:46.799523] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.232 07:42:46 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:21.492 Malloc0 00:15:21.492 07:42:47 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:21.750 07:42:47 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.009 07:42:47 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.267 [2024-12-02 07:42:47.706025] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.267 07:42:47 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:22.525 [2024-12-02 07:42:47.926128] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:22.525 07:42:47 -- host/multipath.sh@44 -- # bdevperf_pid=72272 00:15:22.525 07:42:47 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:22.525 07:42:47 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:22.525 07:42:47 -- host/multipath.sh@47 -- # waitforlisten 72272 /var/tmp/bdevperf.sock 00:15:22.525 07:42:47 -- common/autotest_common.sh@829 -- # '[' -z 72272 ']' 00:15:22.525 07:42:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.525 07:42:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.525 07:42:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:22.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:22.525 07:42:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.525 07:42:47 -- common/autotest_common.sh@10 -- # set +x 00:15:23.461 07:42:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.461 07:42:48 -- common/autotest_common.sh@862 -- # return 0 00:15:23.461 07:42:48 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:23.720 07:42:49 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:23.979 Nvme0n1 00:15:23.979 07:42:49 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:24.238 Nvme0n1 00:15:24.238 07:42:49 -- host/multipath.sh@78 -- # sleep 1 00:15:24.238 07:42:49 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:25.616 07:42:50 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:15:25.616 07:42:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:25.616 07:42:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:25.875 07:42:51 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:15:25.875 07:42:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:25.875 07:42:51 -- host/multipath.sh@65 -- # dtrace_pid=72317 00:15:25.875 07:42:51 -- host/multipath.sh@66 -- # sleep 6 00:15:32.433 07:42:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:32.433 07:42:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:32.433 07:42:57 -- host/multipath.sh@67 -- # active_port=4421 00:15:32.433 07:42:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:32.433 Attaching 4 probes... 
00:15:32.433 @path[10.0.0.2, 4421]: 21076 00:15:32.433 @path[10.0.0.2, 4421]: 21634 00:15:32.433 @path[10.0.0.2, 4421]: 21598 00:15:32.433 @path[10.0.0.2, 4421]: 21639 00:15:32.433 @path[10.0.0.2, 4421]: 21965 00:15:32.433 07:42:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:32.433 07:42:57 -- host/multipath.sh@69 -- # sed -n 1p 00:15:32.433 07:42:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:32.433 07:42:57 -- host/multipath.sh@69 -- # port=4421 00:15:32.433 07:42:57 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:15:32.433 07:42:57 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:15:32.433 07:42:57 -- host/multipath.sh@72 -- # kill 72317 00:15:32.433 07:42:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:32.433 07:42:57 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:15:32.433 07:42:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:32.433 07:42:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:32.692 07:42:58 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:15:32.692 07:42:58 -- host/multipath.sh@65 -- # dtrace_pid=72438 00:15:32.692 07:42:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:32.692 07:42:58 -- host/multipath.sh@66 -- # sleep 6 00:15:39.258 07:43:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:39.258 07:43:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:15:39.258 07:43:04 -- host/multipath.sh@67 -- # active_port=4420 00:15:39.258 07:43:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:39.258 Attaching 4 probes... 
00:15:39.258 @path[10.0.0.2, 4420]: 21557 00:15:39.258 @path[10.0.0.2, 4420]: 21660 00:15:39.258 @path[10.0.0.2, 4420]: 21843 00:15:39.258 @path[10.0.0.2, 4420]: 22081 00:15:39.258 @path[10.0.0.2, 4420]: 21816 00:15:39.258 07:43:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:39.258 07:43:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:39.258 07:43:04 -- host/multipath.sh@69 -- # sed -n 1p 00:15:39.258 07:43:04 -- host/multipath.sh@69 -- # port=4420 00:15:39.258 07:43:04 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:15:39.258 07:43:04 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:15:39.258 07:43:04 -- host/multipath.sh@72 -- # kill 72438 00:15:39.258 07:43:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:39.258 07:43:04 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:15:39.258 07:43:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:39.258 07:43:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:39.258 07:43:04 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:15:39.258 07:43:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:39.258 07:43:04 -- host/multipath.sh@65 -- # dtrace_pid=72547 00:15:39.258 07:43:04 -- host/multipath.sh@66 -- # sleep 6 00:15:45.845 07:43:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:45.845 07:43:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:45.845 07:43:11 -- host/multipath.sh@67 -- # active_port=4421 00:15:45.845 07:43:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:45.845 Attaching 4 probes... 
00:15:45.845 @path[10.0.0.2, 4421]: 14682 00:15:45.845 @path[10.0.0.2, 4421]: 21352 00:15:45.845 @path[10.0.0.2, 4421]: 21624 00:15:45.845 @path[10.0.0.2, 4421]: 21450 00:15:45.845 @path[10.0.0.2, 4421]: 21310 00:15:45.845 07:43:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:45.845 07:43:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:45.845 07:43:11 -- host/multipath.sh@69 -- # sed -n 1p 00:15:45.845 07:43:11 -- host/multipath.sh@69 -- # port=4421 00:15:45.845 07:43:11 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:15:45.845 07:43:11 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:15:45.845 07:43:11 -- host/multipath.sh@72 -- # kill 72547 00:15:45.845 07:43:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:45.845 07:43:11 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:15:45.845 07:43:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:45.845 07:43:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:46.103 07:43:11 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:15:46.103 07:43:11 -- host/multipath.sh@65 -- # dtrace_pid=72665 00:15:46.103 07:43:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:46.103 07:43:11 -- host/multipath.sh@66 -- # sleep 6 00:15:52.657 07:43:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:52.657 07:43:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:15:52.657 07:43:17 -- host/multipath.sh@67 -- # active_port= 00:15:52.657 07:43:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:52.657 Attaching 4 probes... 
00:15:52.657 00:15:52.657 00:15:52.657 00:15:52.657 00:15:52.657 00:15:52.657 07:43:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:52.657 07:43:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:52.657 07:43:17 -- host/multipath.sh@69 -- # sed -n 1p 00:15:52.657 07:43:17 -- host/multipath.sh@69 -- # port= 00:15:52.657 07:43:17 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:15:52.657 07:43:17 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:15:52.657 07:43:17 -- host/multipath.sh@72 -- # kill 72665 00:15:52.657 07:43:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:52.657 07:43:17 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:15:52.657 07:43:17 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:52.657 07:43:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:52.916 07:43:18 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:15:52.916 07:43:18 -- host/multipath.sh@65 -- # dtrace_pid=72785 00:15:52.916 07:43:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:52.916 07:43:18 -- host/multipath.sh@66 -- # sleep 6 00:15:59.483 07:43:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:59.483 07:43:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:59.483 07:43:24 -- host/multipath.sh@67 -- # active_port=4421 00:15:59.483 07:43:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:59.483 Attaching 4 probes... 
00:15:59.483 07:43:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:15:59.483 07:43:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:15:59.483 07:43:24 -- host/multipath.sh@67 -- # active_port=4421
00:15:59.483 07:43:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:15:59.483 Attaching 4 probes...
00:15:59.483 @path[10.0.0.2, 4421]: 20937
00:15:59.483 @path[10.0.0.2, 4421]: 21039
00:15:59.483 @path[10.0.0.2, 4421]: 21109
00:15:59.483 @path[10.0.0.2, 4421]: 21155
00:15:59.483 @path[10.0.0.2, 4421]: 21074
00:15:59.483 07:43:24 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:15:59.483 07:43:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:15:59.483 07:43:24 -- host/multipath.sh@69 -- # sed -n 1p
00:15:59.483 07:43:24 -- host/multipath.sh@69 -- # port=4421
00:15:59.483 07:43:24 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:15:59.483 07:43:24 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:15:59.483 07:43:24 -- host/multipath.sh@72 -- # kill 72785
00:15:59.483 07:43:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:15:59.483 07:43:24 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:15:59.483 [2024-12-02 07:43:24.890218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04230 is same with the state(5) to be set
00:15:59.484 [... the same nvmf_tcp_qpair_set_recv_state ERROR line is repeated roughly 55 more times (07:43:24.890313 through 07:43:24.890804) while the 4421 listener is torn down; duplicates elided ...]
00:15:59.484 07:43:24 -- host/multipath.sh@101 -- # sleep 1
00:16:00.419 07:43:25 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:16:00.419 07:43:25 -- host/multipath.sh@65 -- # dtrace_pid=72909
00:16:00.419 07:43:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:16:00.419 07:43:25 -- host/multipath.sh@66 -- # sleep 6
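While that six-second window runs, the bpftrace program attached above (scripts/bpf/nvmf_path.bt) counts completed I/O per path into trace.txt. The confirm_io_on_port helper exercised at host/multipath.sh@64-@73 can be reconstructed roughly as below. This is a sketch inferred from the xtrace, not the script's verbatim source; it assumes SPDK_ROOT and bdevperf_pid are provided by the caller, the same trace.txt location and subsystem NQN as this run, and that bpftrace.sh can simply be backgrounded as shown.

# Rough reconstruction of confirm_io_on_port as seen in the trace (assumptions noted above).
: "${SPDK_ROOT:=/home/vagrant/spdk_repo/spdk}"
: "${bdevperf_pid:?set to the bdevperf pid (72216 in this run)}"

confirm_io_on_port() {    # usage: confirm_io_on_port <expected_ana_state> <expected_port>
    local ana_state=$1 expected_port=$2
    local rpc="$SPDK_ROOT/scripts/rpc.py"
    local trace="$SPDK_ROOT/test/nvmf/host/trace.txt"

    # Count completed I/O per listener for ~6 s with the nvmf_path.bt probes.
    "$SPDK_ROOT/scripts/bpftrace.sh" "$bdevperf_pid" "$SPDK_ROOT/scripts/bpf/nvmf_path.bt" > "$trace" &
    local dtrace_pid=$!
    sleep 6

    # Which listener does the target itself report in the expected ANA state?
    local active_port
    active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

    # Which port did the probes actually see I/O on? (first @path line in trace.txt)
    cat "$trace"
    local port
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

    [[ $port == "$expected_port" ]] && [[ $active_port == "$expected_port" ]]
    local rc=$?
    kill "$dtrace_pid"
    rm -f "$trace"
    return $rc
}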
00:16:06.985 07:43:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:16:06.985 07:43:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:16:06.985 07:43:32 -- host/multipath.sh@67 -- # active_port=4420
00:16:06.985 07:43:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:16:06.985 Attaching 4 probes...
00:16:06.985 @path[10.0.0.2, 4420]: 20545
00:16:06.985 @path[10.0.0.2, 4420]: 21023
00:16:06.985 @path[10.0.0.2, 4420]: 20853
00:16:06.985 @path[10.0.0.2, 4420]: 21106
00:16:06.985 @path[10.0.0.2, 4420]: 21302
00:16:06.985 07:43:32 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:16:06.985 07:43:32 -- host/multipath.sh@69 -- # sed -n 1p
00:16:06.985 07:43:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:16:06.985 07:43:32 -- host/multipath.sh@69 -- # port=4420
00:16:06.985 07:43:32 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:16:06.985 07:43:32 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:16:06.985 07:43:32 -- host/multipath.sh@72 -- # kill 72909
00:16:06.985 07:43:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:16:06.985 07:43:32 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:06.986 [2024-12-02 07:43:32.412784] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:06.986 07:43:32 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:16:07.244 07:43:32 -- host/multipath.sh@111 -- # sleep 6
00:16:13.809 07:43:38 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:16:13.809 07:43:38 -- host/multipath.sh@65 -- # dtrace_pid=73083
00:16:13.809 07:43:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72216 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:16:13.809 07:43:38 -- host/multipath.sh@66 -- # sleep 6
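At this point every step of the scenario has appeared in the trace once: drop the optimized listener, watch I/O fail over, bring the listener back and watch I/O fail back. A condensed restatement of that sequence (host/multipath.sh@100-@112) is sketched below for orientation; it is not the script's source, reuses the set_ana_state and confirm_io_on_port sketches above, and assumes the same NQN, address and ports as this run.

# Failover/failback sequence as traced above (sketch; relies on the earlier helper sketches).
rpc="$SPDK_ROOT/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421    # drop the optimized path
sleep 1
confirm_io_on_port non_optimized 4420                                      # I/O must have moved to 4420

"$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421       # bring the path back ...
"$rpc" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 6                                                                    # ... and let the host reconnect
confirm_io_on_port optimized 4421                                          # I/O must return to 4421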
00:16:20.385 07:43:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:16:20.385 07:43:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:16:20.385 07:43:44 -- host/multipath.sh@67 -- # active_port=4421
00:16:20.385 07:43:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:16:20.385 Attaching 4 probes...
00:16:20.385 @path[10.0.0.2, 4421]: 20587
00:16:20.385 @path[10.0.0.2, 4421]: 21140
00:16:20.385 @path[10.0.0.2, 4421]: 21111
00:16:20.385 @path[10.0.0.2, 4421]: 21074
00:16:20.385 @path[10.0.0.2, 4421]: 21203
00:16:20.385 07:43:44 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:16:20.385 07:43:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:16:20.385 07:43:44 -- host/multipath.sh@69 -- # sed -n 1p
00:16:20.385 07:43:44 -- host/multipath.sh@69 -- # port=4421
00:16:20.385 07:43:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:16:20.385 07:43:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:16:20.385 07:43:44 -- host/multipath.sh@72 -- # kill 73083
00:16:20.385 07:43:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:16:20.385 07:43:44 -- host/multipath.sh@114 -- # killprocess 72272
00:16:20.385 07:43:44 -- common/autotest_common.sh@936 -- # '[' -z 72272 ']'
00:16:20.385 07:43:44 -- common/autotest_common.sh@940 -- # kill -0 72272
00:16:20.385 07:43:44 -- common/autotest_common.sh@941 -- # uname
00:16:20.385 07:43:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:20.385 07:43:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72272
00:16:20.385 killing process with pid 72272
00:16:20.385 07:43:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:16:20.385 07:43:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:16:20.385 07:43:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72272'
00:16:20.385 07:43:45 -- common/autotest_common.sh@955 -- # kill 72272
00:16:20.385 07:43:45 -- common/autotest_common.sh@960 -- # wait 72272
00:16:20.385 Connection closed with partial response:
00:16:20.385 
00:16:20.385 
00:16:20.385 07:43:45 -- host/multipath.sh@116 -- # wait 72272
00:16:20.385 07:43:45 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:20.385 [2024-12-02 07:42:47.995524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:20.385 [2024-12-02 07:42:47.995623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72272 ]
00:16:20.385 [2024-12-02 07:42:48.136715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:20.385 [2024-12-02 07:42:48.203004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:20.385 Running I/O for 90 seconds...
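The shutdown traced just above comes from the shared killprocess helper (common/autotest_common.sh@936-@960): check that a pid was supplied and is alive, refuse to kill a sudo wrapper, log, kill, and wait for the process. A simplified reconstruction is sketched below; it is inferred from the xtrace, not the helper's verbatim source, and keeps only the Linux path seen in this run. The per-I/O trace that bdevperf wrote to try.txt follows.

# Simplified reconstruction of the killprocess helper as traced above (not verbatim source).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # nothing to do without a pid
    kill -0 "$pid" || return 0                         # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")    # Linux-only in this sketch
    [ "$process_name" = sudo ] && return 1             # never kill a privileged wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reaps the child; here this is where bdevperf (reactor_2) exits
}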
00:16:20.385 [2024-12-02 07:42:58.109549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.109614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.109703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.109740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.109773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.109807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.109841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.109874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.109908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.109942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.109975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.109995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.110164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.385 [2024-12-02 07:42:58.110559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.110609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:20.385 [2024-12-02 07:42:58.110630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.385 [2024-12-02 07:42:58.110645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.110680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.110715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.110752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:20.386 [2024-12-02 07:42:58.110787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.110824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.110859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.110880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.110895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.111934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.111969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.111989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.112004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:16:20.386 [2024-12-02 07:42:58.112024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.112045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.112067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.386 [2024-12-02 07:42:58.112082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.112117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.112138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.112153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.112174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.386 [2024-12-02 07:42:58.112188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:20.386 [2024-12-02 07:42:58.112209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.112963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.112984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.112999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.113041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:20.387 [2024-12-02 07:42:58.113112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.113183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.113253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.387 [2024-12-02 07:42:58.113289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.113583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.113598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:20.387 [2024-12-02 07:42:58.115253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.387 [2024-12-02 07:42:58.115284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 
dnr:0 00:16:20.388 [2024-12-02 07:42:58.115866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:42:58.115880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.115956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.115992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.116011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:42:58.116032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:42:58.116047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:43:04.620345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:43:04.620513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:43:04.620888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:43:04.620920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.388 [2024-12-02 07:43:04.620954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:20.388 [2024-12-02 07:43:04.620972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.388 [2024-12-02 07:43:04.620985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130832 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.621932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.621967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.621987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.622002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.622036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.622085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.622119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 
07:43:04.622139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.622152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.622203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.622284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.622341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.389 [2024-12-02 07:43:04.622381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.389 [2024-12-02 07:43:04.622419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:20.389 [2024-12-02 07:43:04.622441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.622457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.622576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.622625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.622693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.622972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.622987] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.623053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.623087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.623142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.623244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.623287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:20.390 [2024-12-02 07:43:04.623354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.390 [2024-12-02 07:43:04.623873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:20.390 [2024-12-02 07:43:04.623893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.390 [2024-12-02 07:43:04.623907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.623927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.623941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.623962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.623977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.623997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.624026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.624277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.624309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.624355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.624592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.624613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.625447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.625492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.625637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.625681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.625854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.625897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.625966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.625994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.626008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.626051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.626071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.626099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.626115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.626142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.626157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.626184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.626198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.626258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.391 [2024-12-02 07:43:04.626276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:20.391 [2024-12-02 07:43:04.626306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.391 [2024-12-02 07:43:04.626337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.604796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.604865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.604933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.604952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.604973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.604988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605069] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 
07:43:11.605431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.392 [2024-12-02 07:43:11.605827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.605983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.605997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.392 [2024-12-02 07:43:11.606272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.392 [2024-12-02 07:43:11.606287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.606800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.606973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.606993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.607281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.607362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.393 [2024-12-02 07:43:11.607530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 
07:43:11.607564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.393 [2024-12-02 07:43:11.607630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:20.393 [2024-12-02 07:43:11.607649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22552 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.607941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.607974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.607993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608233] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.608485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 
07:43:11.608619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.608720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.608734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.609566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.609593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.609626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.609643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.609671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.609701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.609728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.394 [2024-12-02 07:43:11.609743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:20.394 [2024-12-02 07:43:11.609770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.394 [2024-12-02 07:43:11.609784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.609810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.609825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.609851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.609865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.609892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.609916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.609945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.609960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.609986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.610122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610311] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.610374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.610418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:11.610467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.610521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:11.610566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:11.610597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.395 [2024-12-02 07:43:24.891931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.395 [2024-12-02 07:43:24.891944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.395 [2024-12-02 07:43:24.891957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.891987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 
[2024-12-02 07:43:24.892320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:112 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.892955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.892981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.892995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.893008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.893023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.893035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.893049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.396 [2024-12-02 07:43:24.893067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.396 [2024-12-02 07:43:24.893082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.396 [2024-12-02 07:43:24.893095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71848 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:20.397 [2024-12-02 07:43:24.893467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893740] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.397 [2024-12-02 07:43:24.893908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.893976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.893989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894015] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.397 [2024-12-02 07:43:24.894200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.397 [2024-12-02 07:43:24.894213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:20.398 [2024-12-02 07:43:24.894698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.398 [2024-12-02 07:43:24.894819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.894975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.398 [2024-12-02 07:43:24.894987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.895001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76bc50 is same with the state(5) to be set 00:16:20.398 [2024-12-02 07:43:24.895016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:20.398 [2024-12-02 07:43:24.895026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:20.398 [2024-12-02 07:43:24.895038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71544 len:8 PRP1 0x0 PRP2 0x0 00:16:20.398 [2024-12-02 07:43:24.895052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.398 [2024-12-02 07:43:24.895095] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x76bc50 was disconnected and freed. reset controller. 00:16:20.398 [2024-12-02 07:43:24.896049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:20.398 [2024-12-02 07:43:24.896127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x748b20 (9): Bad file descriptor 00:16:20.398 [2024-12-02 07:43:24.896429] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:20.398 [2024-12-02 07:43:24.896502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:20.398 [2024-12-02 07:43:24.896550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:20.398 [2024-12-02 07:43:24.896571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x748b20 with addr=10.0.0.2, port=4421 00:16:20.398 [2024-12-02 07:43:24.896587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x748b20 is same with the state(5) to be set 00:16:20.398 [2024-12-02 07:43:24.896618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x748b20 (9): Bad file descriptor 00:16:20.398 [2024-12-02 07:43:24.896648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:20.398 [2024-12-02 07:43:24.896664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:20.398 [2024-12-02 07:43:24.896677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:20.398 [2024-12-02 07:43:24.896706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:20.398 [2024-12-02 07:43:24.896723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:20.398 [2024-12-02 07:43:34.952163] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
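The connect() failures with errno = 111 (ECONNREFUSED) above, followed roughly ten seconds later by "Resetting controller successful", show the bdev_nvme reconnect path re-dialing 10.0.0.2 port 4421 until a listener accepts again. A minimal sketch of how that refuse-then-recover cycle can be driven with the same rpc.py calls this job uses elsewhere; the subsystem NQN, address and port are copied from the log, and the sketch is an illustration rather than a command sequence from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop the listener: the initiator's periodic reconnect attempts now fail with errno 111.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10
# Restore it: the next reconnect attempt succeeds and the controller reset completes.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421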
00:16:20.398 Received shutdown signal, test time was about 55.090359 seconds 00:16:20.398 00:16:20.398 Latency(us) 00:16:20.398 [2024-12-02T07:43:46.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.398 [2024-12-02T07:43:46.022Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:20.398 Verification LBA range: start 0x0 length 0x4000 00:16:20.399 Nvme0n1 : 55.09 12107.17 47.29 0.00 0.00 10554.48 372.36 7015926.69 00:16:20.399 [2024-12-02T07:43:46.023Z] =================================================================================================================== 00:16:20.399 [2024-12-02T07:43:46.023Z] Total : 12107.17 47.29 0.00 0.00 10554.48 372.36 7015926.69 00:16:20.399 07:43:45 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.399 07:43:45 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:16:20.399 07:43:45 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:20.399 07:43:45 -- host/multipath.sh@125 -- # nvmftestfini 00:16:20.399 07:43:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:20.399 07:43:45 -- nvmf/common.sh@116 -- # sync 00:16:20.399 07:43:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:20.399 07:43:45 -- nvmf/common.sh@119 -- # set +e 00:16:20.399 07:43:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:20.399 07:43:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:20.399 rmmod nvme_tcp 00:16:20.399 rmmod nvme_fabrics 00:16:20.399 rmmod nvme_keyring 00:16:20.399 07:43:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:20.399 07:43:45 -- nvmf/common.sh@123 -- # set -e 00:16:20.399 07:43:45 -- nvmf/common.sh@124 -- # return 0 00:16:20.399 07:43:45 -- nvmf/common.sh@477 -- # '[' -n 72216 ']' 00:16:20.399 07:43:45 -- nvmf/common.sh@478 -- # killprocess 72216 00:16:20.399 07:43:45 -- common/autotest_common.sh@936 -- # '[' -z 72216 ']' 00:16:20.399 07:43:45 -- common/autotest_common.sh@940 -- # kill -0 72216 00:16:20.399 07:43:45 -- common/autotest_common.sh@941 -- # uname 00:16:20.399 07:43:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.399 07:43:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72216 00:16:20.399 07:43:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:20.399 killing process with pid 72216 00:16:20.399 07:43:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:20.399 07:43:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72216' 00:16:20.399 07:43:45 -- common/autotest_common.sh@955 -- # kill 72216 00:16:20.399 07:43:45 -- common/autotest_common.sh@960 -- # wait 72216 00:16:20.399 07:43:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:20.399 07:43:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:20.399 07:43:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:20.399 07:43:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.399 07:43:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:20.399 07:43:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.399 07:43:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.399 07:43:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.399 07:43:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:20.399 ************************************ 00:16:20.399 END TEST 
nvmf_multipath 00:16:20.399 ************************************ 00:16:20.399 00:16:20.399 real 1m0.877s 00:16:20.399 user 2m48.178s 00:16:20.399 sys 0m18.162s 00:16:20.399 07:43:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:20.399 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:16:20.399 07:43:45 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:16:20.399 07:43:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:20.399 07:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.399 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:16:20.399 ************************************ 00:16:20.399 START TEST nvmf_timeout 00:16:20.399 ************************************ 00:16:20.399 07:43:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:16:20.399 * Looking for test storage... 00:16:20.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:20.399 07:43:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:20.399 07:43:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:20.399 07:43:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:20.658 07:43:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:20.658 07:43:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:20.658 07:43:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:20.658 07:43:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:20.658 07:43:46 -- scripts/common.sh@335 -- # IFS=.-: 00:16:20.658 07:43:46 -- scripts/common.sh@335 -- # read -ra ver1 00:16:20.658 07:43:46 -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.658 07:43:46 -- scripts/common.sh@336 -- # read -ra ver2 00:16:20.658 07:43:46 -- scripts/common.sh@337 -- # local 'op=<' 00:16:20.658 07:43:46 -- scripts/common.sh@339 -- # ver1_l=2 00:16:20.658 07:43:46 -- scripts/common.sh@340 -- # ver2_l=1 00:16:20.658 07:43:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:20.658 07:43:46 -- scripts/common.sh@343 -- # case "$op" in 00:16:20.658 07:43:46 -- scripts/common.sh@344 -- # : 1 00:16:20.658 07:43:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:20.658 07:43:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.658 07:43:46 -- scripts/common.sh@364 -- # decimal 1 00:16:20.658 07:43:46 -- scripts/common.sh@352 -- # local d=1 00:16:20.658 07:43:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.658 07:43:46 -- scripts/common.sh@354 -- # echo 1 00:16:20.658 07:43:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:20.658 07:43:46 -- scripts/common.sh@365 -- # decimal 2 00:16:20.658 07:43:46 -- scripts/common.sh@352 -- # local d=2 00:16:20.658 07:43:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.658 07:43:46 -- scripts/common.sh@354 -- # echo 2 00:16:20.658 07:43:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:20.658 07:43:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:20.658 07:43:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:20.658 07:43:46 -- scripts/common.sh@367 -- # return 0 00:16:20.658 07:43:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.658 07:43:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:20.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.658 --rc genhtml_branch_coverage=1 00:16:20.659 --rc genhtml_function_coverage=1 00:16:20.659 --rc genhtml_legend=1 00:16:20.659 --rc geninfo_all_blocks=1 00:16:20.659 --rc geninfo_unexecuted_blocks=1 00:16:20.659 00:16:20.659 ' 00:16:20.659 07:43:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.659 --rc genhtml_branch_coverage=1 00:16:20.659 --rc genhtml_function_coverage=1 00:16:20.659 --rc genhtml_legend=1 00:16:20.659 --rc geninfo_all_blocks=1 00:16:20.659 --rc geninfo_unexecuted_blocks=1 00:16:20.659 00:16:20.659 ' 00:16:20.659 07:43:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.659 --rc genhtml_branch_coverage=1 00:16:20.659 --rc genhtml_function_coverage=1 00:16:20.659 --rc genhtml_legend=1 00:16:20.659 --rc geninfo_all_blocks=1 00:16:20.659 --rc geninfo_unexecuted_blocks=1 00:16:20.659 00:16:20.659 ' 00:16:20.659 07:43:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:20.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.659 --rc genhtml_branch_coverage=1 00:16:20.659 --rc genhtml_function_coverage=1 00:16:20.659 --rc genhtml_legend=1 00:16:20.659 --rc geninfo_all_blocks=1 00:16:20.659 --rc geninfo_unexecuted_blocks=1 00:16:20.659 00:16:20.659 ' 00:16:20.659 07:43:46 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.659 07:43:46 -- nvmf/common.sh@7 -- # uname -s 00:16:20.659 07:43:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.659 07:43:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.659 07:43:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.659 07:43:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.659 07:43:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.659 07:43:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.659 07:43:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.659 07:43:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.659 07:43:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.659 07:43:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.659 07:43:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:16:20.659 
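The cmp_versions trace from scripts/common.sh earlier in this block splits both version strings on IFS=.-: and compares them field by field, which is how 'lt 1.15 2' comes out true for the installed lcov. A simplified standalone bash sketch of that field-by-field comparison (numeric dot-separated fields only, so not the full SPDK helper):

version_lt() {
    local IFS=. i x y
    local -a a=($1) b=($2)
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0   # an earlier field already decides the comparison
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"   # true, matching the 'lt 1.15 2' trace above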
07:43:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:16:20.659 07:43:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.659 07:43:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.659 07:43:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.659 07:43:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.659 07:43:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.659 07:43:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.659 07:43:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.659 07:43:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.659 07:43:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.659 07:43:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.659 07:43:46 -- paths/export.sh@5 -- # export PATH 00:16:20.659 07:43:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.659 07:43:46 -- nvmf/common.sh@46 -- # : 0 00:16:20.659 07:43:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:20.659 07:43:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:20.659 07:43:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:20.659 07:43:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.659 07:43:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.659 07:43:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
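nvmf/common.sh assembles the initiator identity here: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID reuses the same UUID, and NVME_HOST carries them as --hostnqn/--hostid arguments for NVME_CONNECT ('nvme connect'). This particular job drives I/O through bdevperf rather than the kernel initiator, but as an illustration of how those variables would compose, with the target address, port and subsystem NQN taken from later in this log, the host-side command would look roughly like:

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a \
    --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a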
00:16:20.659 07:43:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:20.659 07:43:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:20.659 07:43:46 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.659 07:43:46 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.659 07:43:46 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.659 07:43:46 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:20.659 07:43:46 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.659 07:43:46 -- host/timeout.sh@19 -- # nvmftestinit 00:16:20.659 07:43:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:20.659 07:43:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.659 07:43:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:20.659 07:43:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:20.659 07:43:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:20.659 07:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.659 07:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.659 07:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.659 07:43:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:20.659 07:43:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:20.659 07:43:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:20.659 07:43:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:20.659 07:43:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:20.659 07:43:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:20.659 07:43:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.659 07:43:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.659 07:43:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:20.659 07:43:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:20.659 07:43:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.659 07:43:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.659 07:43:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.659 07:43:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.659 07:43:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.659 07:43:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.659 07:43:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.659 07:43:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.659 07:43:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:20.659 07:43:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:20.659 Cannot find device "nvmf_tgt_br" 00:16:20.659 07:43:46 -- nvmf/common.sh@154 -- # true 00:16:20.659 07:43:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.659 Cannot find device "nvmf_tgt_br2" 00:16:20.659 07:43:46 -- nvmf/common.sh@155 -- # true 00:16:20.659 07:43:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:20.659 07:43:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:20.659 Cannot find device "nvmf_tgt_br" 00:16:20.659 07:43:46 -- nvmf/common.sh@157 -- # true 00:16:20.659 07:43:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:20.659 Cannot find device "nvmf_tgt_br2" 00:16:20.659 07:43:46 -- nvmf/common.sh@158 -- # true 00:16:20.659 07:43:46 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:20.659 07:43:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:20.659 07:43:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.659 07:43:46 -- nvmf/common.sh@161 -- # true 00:16:20.659 07:43:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.659 07:43:46 -- nvmf/common.sh@162 -- # true 00:16:20.659 07:43:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.659 07:43:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.659 07:43:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.659 07:43:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.659 07:43:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.659 07:43:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.659 07:43:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.659 07:43:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.659 07:43:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.919 07:43:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:20.919 07:43:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:20.919 07:43:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:20.919 07:43:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:20.919 07:43:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.919 07:43:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.919 07:43:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.919 07:43:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:20.919 07:43:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:20.919 07:43:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.919 07:43:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.919 07:43:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.919 07:43:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.919 07:43:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.919 07:43:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:20.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:20.919 00:16:20.919 --- 10.0.0.2 ping statistics --- 00:16:20.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.919 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:20.919 07:43:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:20.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:20.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:20.919 00:16:20.919 --- 10.0.0.3 ping statistics --- 00:16:20.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.919 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:20.919 07:43:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:20.919 00:16:20.919 --- 10.0.0.1 ping statistics --- 00:16:20.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.920 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:20.920 07:43:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.920 07:43:46 -- nvmf/common.sh@421 -- # return 0 00:16:20.920 07:43:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:20.920 07:43:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.920 07:43:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:20.920 07:43:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:20.920 07:43:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.920 07:43:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:20.920 07:43:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:20.920 07:43:46 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:16:20.920 07:43:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:20.920 07:43:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.920 07:43:46 -- common/autotest_common.sh@10 -- # set +x 00:16:20.920 07:43:46 -- nvmf/common.sh@469 -- # nvmfpid=73403 00:16:20.920 07:43:46 -- nvmf/common.sh@470 -- # waitforlisten 73403 00:16:20.920 07:43:46 -- common/autotest_common.sh@829 -- # '[' -z 73403 ']' 00:16:20.920 07:43:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:20.920 07:43:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.920 07:43:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.920 07:43:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.920 07:43:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.920 07:43:46 -- common/autotest_common.sh@10 -- # set +x 00:16:20.920 [2024-12-02 07:43:46.458781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:20.920 [2024-12-02 07:43:46.458871] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.179 [2024-12-02 07:43:46.596606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.179 [2024-12-02 07:43:46.647572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:21.179 [2024-12-02 07:43:46.647740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.179 [2024-12-02 07:43:46.647752] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:21.179 [2024-12-02 07:43:46.647759] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.179 [2024-12-02 07:43:46.647915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.179 [2024-12-02 07:43:46.647943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.113 07:43:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.113 07:43:47 -- common/autotest_common.sh@862 -- # return 0 00:16:22.113 07:43:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:22.113 07:43:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.113 07:43:47 -- common/autotest_common.sh@10 -- # set +x 00:16:22.113 07:43:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.113 07:43:47 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.113 07:43:47 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.373 [2024-12-02 07:43:47.765552] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.373 07:43:47 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:22.632 Malloc0 00:16:22.632 07:43:48 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:22.632 07:43:48 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.891 07:43:48 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.149 [2024-12-02 07:43:48.676001] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.149 07:43:48 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:16:23.149 07:43:48 -- host/timeout.sh@32 -- # bdevperf_pid=73452 00:16:23.149 07:43:48 -- host/timeout.sh@34 -- # waitforlisten 73452 /var/tmp/bdevperf.sock 00:16:23.149 07:43:48 -- common/autotest_common.sh@829 -- # '[' -z 73452 ']' 00:16:23.149 07:43:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.149 07:43:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.149 07:43:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.149 07:43:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.149 07:43:48 -- common/autotest_common.sh@10 -- # set +x 00:16:23.149 [2024-12-02 07:43:48.729265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:23.149 [2024-12-02 07:43:48.729372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73452 ] 00:16:23.408 [2024-12-02 07:43:48.859317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.408 [2024-12-02 07:43:48.913365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.342 07:43:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.342 07:43:49 -- common/autotest_common.sh@862 -- # return 0 00:16:24.342 07:43:49 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:24.342 07:43:49 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:16:24.600 NVMe0n1 00:16:24.600 07:43:50 -- host/timeout.sh@51 -- # rpc_pid=73480 00:16:24.600 07:43:50 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:24.600 07:43:50 -- host/timeout.sh@53 -- # sleep 1 00:16:24.859 Running I/O for 10 seconds... 00:16:25.793 07:43:51 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.054 [2024-12-02 07:43:51.457690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 
[2024-12-02 07:43:51.457849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62480 is same with the state(5) to be set 00:16:26.055 [2024-12-02 07:43:51.457913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.457941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.457977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458608] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.458982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.458993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.055 [2024-12-02 07:43:51.459089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.055 [2024-12-02 07:43:51.459128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.055 [2024-12-02 07:43:51.459549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:26.055 [2024-12-02 07:43:51.459942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.459963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.055 [2024-12-02 07:43:51.459972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.055 [2024-12-02 07:43:51.460076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.460091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.460112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.460252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.460379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.460401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.460422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.460552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.460576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460687] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.460719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.460840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.460863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.460874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.461003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.461026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.461166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.461287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.461487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.461749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.461775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.461999] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.462590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.462611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.462825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.462873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.462892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.462912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.462922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.463065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.463186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.463206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.463353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.056 [2024-12-02 07:43:51.463426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.463443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.463453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.056 [2024-12-02 07:43:51.463464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.056 [2024-12-02 07:43:51.463473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.463484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.463493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.463504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.463643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.463774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.463795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.463915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.463933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.463945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.464049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.464073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.464204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.464359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 
07:43:51.464520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.464673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.464808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.464947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.465088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.465225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.465986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.465995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.466141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.466270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.466552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.466573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.466594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.057 [2024-12-02 07:43:51.466850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.466929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.466939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.467073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.467194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.467216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.467232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.467445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.467460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.467470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:26.057 [2024-12-02 07:43:51.467482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.057 [2024-12-02 07:43:51.467491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.058 [2024-12-02 07:43:51.467512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.058 [2024-12-02 07:43:51.467752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.467776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.467796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.058 [2024-12-02 07:43:51.467815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.467835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.467854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.467988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:26.058 [2024-12-02 07:43:51.468003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 
[2024-12-02 07:43:51.468258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.058 [2024-12-02 07:43:51.468844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(5) to be set 00:16:26.058 [2024-12-02 07:43:51.468866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:26.058 [2024-12-02 07:43:51.468874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:26.058 [2024-12-02 07:43:51.468882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:8 PRP1 0x0 PRP2 0x0 00:16:26.058 [2024-12-02 07:43:51.468891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.468942] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8db0c0 was disconnected and freed. reset controller. 
00:16:26.058 [2024-12-02 07:43:51.469333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.058 [2024-12-02 07:43:51.469362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.469374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.058 [2024-12-02 07:43:51.469384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.469393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.058 [2024-12-02 07:43:51.469406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.469415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.058 [2024-12-02 07:43:51.469425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.058 [2024-12-02 07:43:51.469433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878010 is same with the state(5) to be set 00:16:26.058 [2024-12-02 07:43:51.469835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.058 [2024-12-02 07:43:51.469870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878010 (9): Bad file descriptor 00:16:26.058 [2024-12-02 07:43:51.470156] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.058 [2024-12-02 07:43:51.470232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.058 [2024-12-02 07:43:51.470509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.058 [2024-12-02 07:43:51.470542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878010 with addr=10.0.0.2, port=4420 00:16:26.058 [2024-12-02 07:43:51.470554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878010 is same with the state(5) to be set 00:16:26.058 [2024-12-02 07:43:51.470576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878010 (9): Bad file descriptor 00:16:26.058 [2024-12-02 07:43:51.470598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:26.058 [2024-12-02 07:43:51.470608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:26.058 [2024-12-02 07:43:51.470891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:26.058 [2024-12-02 07:43:51.470916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
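The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: the initiator keeps retrying 10.0.0.2:4420 while no TCP listener is accepting connections, which is the condition this timeout test provokes by removing the subsystem listener (the listener is re-added and removed again further down). A quick, illustrative way to decode the errno value, not part of the test scripts:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused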
00:16:26.058 [2024-12-02 07:43:51.470929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.058 07:43:51 -- host/timeout.sh@56 -- # sleep 2 00:16:27.962 [2024-12-02 07:43:53.471001] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.962 [2024-12-02 07:43:53.471102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.962 [2024-12-02 07:43:53.471141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.962 [2024-12-02 07:43:53.471157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878010 with addr=10.0.0.2, port=4420 00:16:27.962 [2024-12-02 07:43:53.471167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878010 is same with the state(5) to be set 00:16:27.962 [2024-12-02 07:43:53.471186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878010 (9): Bad file descriptor 00:16:27.962 [2024-12-02 07:43:53.471202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:27.962 [2024-12-02 07:43:53.471210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:27.962 [2024-12-02 07:43:53.471219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:27.962 [2024-12-02 07:43:53.471239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:27.962 [2024-12-02 07:43:53.471249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:27.962 07:43:53 -- host/timeout.sh@57 -- # get_controller 00:16:27.962 07:43:53 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:27.962 07:43:53 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:16:28.241 07:43:53 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:16:28.241 07:43:53 -- host/timeout.sh@58 -- # get_bdev 00:16:28.241 07:43:53 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:16:28.241 07:43:53 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:16:28.512 07:43:53 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:16:28.512 07:43:53 -- host/timeout.sh@61 -- # sleep 5 00:16:29.887 [2024-12-02 07:43:55.471343] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:29.887 [2024-12-02 07:43:55.471458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:29.887 [2024-12-02 07:43:55.471500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:29.887 [2024-12-02 07:43:55.471515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878010 with addr=10.0.0.2, port=4420 00:16:29.887 [2024-12-02 07:43:55.471528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878010 is same with the state(5) to be set 00:16:29.887 [2024-12-02 07:43:55.471549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878010 (9): Bad file descriptor 00:16:29.887 [2024-12-02 07:43:55.471567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:29.887 [2024-12-02 07:43:55.471576] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:29.887 [2024-12-02 07:43:55.471584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:29.887 [2024-12-02 07:43:55.471608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:29.887 [2024-12-02 07:43:55.471618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:32.417 [2024-12-02 07:43:57.471835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:32.417 [2024-12-02 07:43:57.471887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:32.417 [2024-12-02 07:43:57.471913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:32.417 [2024-12-02 07:43:57.471921] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:16:32.417 [2024-12-02 07:43:57.471941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:32.985 00:16:32.985 Latency(us) 00:16:32.985 [2024-12-02T07:43:58.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.985 [2024-12-02T07:43:58.609Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:32.985 Verification LBA range: start 0x0 length 0x4000 00:16:32.985 NVMe0n1 : 8.16 2170.16 8.48 15.68 0.00 58544.10 2829.96 7046430.72 00:16:32.985 [2024-12-02T07:43:58.609Z] =================================================================================================================== 00:16:32.985 [2024-12-02T07:43:58.609Z] Total : 2170.16 8.48 15.68 0.00 58544.10 2829.96 7046430.72 00:16:32.985 0 00:16:33.553 07:43:58 -- host/timeout.sh@62 -- # get_controller 00:16:33.553 07:43:59 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:33.553 07:43:59 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:16:33.812 07:43:59 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:16:33.812 07:43:59 -- host/timeout.sh@63 -- # get_bdev 00:16:33.812 07:43:59 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:16:33.812 07:43:59 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:16:34.071 07:43:59 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:16:34.071 07:43:59 -- host/timeout.sh@65 -- # wait 73480 00:16:34.071 07:43:59 -- host/timeout.sh@67 -- # killprocess 73452 00:16:34.071 07:43:59 -- common/autotest_common.sh@936 -- # '[' -z 73452 ']' 00:16:34.071 07:43:59 -- common/autotest_common.sh@940 -- # kill -0 73452 00:16:34.071 07:43:59 -- common/autotest_common.sh@941 -- # uname 00:16:34.071 07:43:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.071 07:43:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73452 00:16:34.071 killing process with pid 73452 00:16:34.071 Received shutdown signal, test time was about 9.245296 seconds 00:16:34.071 00:16:34.071 Latency(us) 00:16:34.071 [2024-12-02T07:43:59.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.071 [2024-12-02T07:43:59.695Z] =================================================================================================================== 00:16:34.071 
[2024-12-02T07:43:59.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.071 07:43:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:34.071 07:43:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:34.071 07:43:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73452' 00:16:34.071 07:43:59 -- common/autotest_common.sh@955 -- # kill 73452 00:16:34.071 07:43:59 -- common/autotest_common.sh@960 -- # wait 73452 00:16:34.330 07:43:59 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.330 [2024-12-02 07:43:59.905219] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.330 07:43:59 -- host/timeout.sh@74 -- # bdevperf_pid=73598 00:16:34.330 07:43:59 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:16:34.330 07:43:59 -- host/timeout.sh@76 -- # waitforlisten 73598 /var/tmp/bdevperf.sock 00:16:34.330 07:43:59 -- common/autotest_common.sh@829 -- # '[' -z 73598 ']' 00:16:34.330 07:43:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.330 07:43:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.330 07:43:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.330 07:43:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.330 07:43:59 -- common/autotest_common.sh@10 -- # set +x 00:16:34.589 [2024-12-02 07:43:59.963771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:34.589 [2024-12-02 07:43:59.964198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73598 ] 00:16:34.589 [2024-12-02 07:44:00.095040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.589 [2024-12-02 07:44:00.144733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.526 07:44:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.527 07:44:00 -- common/autotest_common.sh@862 -- # return 0 00:16:35.527 07:44:00 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:35.527 07:44:01 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:16:35.785 NVMe0n1 00:16:35.785 07:44:01 -- host/timeout.sh@84 -- # rpc_pid=73622 00:16:35.785 07:44:01 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:35.785 07:44:01 -- host/timeout.sh@86 -- # sleep 1 00:16:36.043 Running I/O for 10 seconds... 
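The trace above is the setup for the next leg of host/timeout.sh: bdevperf is started against /var/tmp/bdevperf.sock (the -z flag makes it wait for an RPC before running I/O), bdev_nvme_set_options is called with -r -1, and NVMe0 is attached with --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1 before perform_tests kicks off the 10-second verify run. The same sequence, consolidated as a sketch — commands and paths are taken verbatim from the trace, while the surrounding bash plumbing of the helpers is omitted:

# Sketch of the sequence visible in the trace (not a standalone reproduction of the test).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# bdevperf was launched with: -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f
$rpc -s $sock bdev_nvme_set_options -r -1
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# get_controller / get_bdev in host/timeout.sh verify the attach result with jq:
$rpc -s $sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects NVMe0
$rpc -s $sock bdev_get_bdevs | jq -r '.[].name'              # expects NVMe0n1

# perform_tests starts the queued I/O run inside the waiting bdevperf process:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests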
00:16:37.010 07:44:02 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.270 [2024-12-02 07:44:02.654589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.654993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.655000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.655007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc27b0 is same with the state(5) to be set 00:16:37.270 [2024-12-02 07:44:02.655174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.270 [2024-12-02 07:44:02.655439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.270 [2024-12-02 07:44:02.655447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:96 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.655635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.655653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.655803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.655812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.656582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 
07:44:02.656620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.656763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.656799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.656837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.271 [2024-12-02 07:44:02.656856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.656987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.656995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.271 [2024-12-02 07:44:02.657005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.271 [2024-12-02 07:44:02.657013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:37.272 [2024-12-02 07:44:02.657243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657453] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.272 [2024-12-02 07:44:02.657728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.272 [2024-12-02 07:44:02.657746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.272 [2024-12-02 07:44:02.657756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.657764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.657782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.657799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.657961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.657983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.657993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 
[2024-12-02 07:44:02.658001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:37.273 [2024-12-02 07:44:02.658357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:37.273 [2024-12-02 07:44:02.658516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.273 [2024-12-02 07:44:02.658526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130c0c0 is same with the state(5) to be set 00:16:37.273 [2024-12-02 07:44:02.658539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:37.273 [2024-12-02 07:44:02.658546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:37.274 [2024-12-02 07:44:02.658555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8744 len:8 PRP1 0x0 PRP2 0x0 00:16:37.274 [2024-12-02 07:44:02.658563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.274 [2024-12-02 07:44:02.658604] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x130c0c0 was disconnected and freed. reset controller. 
00:16:37.274 [2024-12-02 07:44:02.658723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.274 [2024-12-02 07:44:02.658753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.274 [2024-12-02 07:44:02.658779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.274 [2024-12-02 07:44:02.658805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.274 [2024-12-02 07:44:02.658814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.274 [2024-12-02 07:44:02.658823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.274 [2024-12-02 07:44:02.658845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.274 [2024-12-02 07:44:02.658854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.274 [2024-12-02 07:44:02.658862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:37.274 [2024-12-02 07:44:02.659925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:37.274 [2024-12-02 07:44:02.659952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:37.274 [2024-12-02 07:44:02.660046] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:37.274 [2024-12-02 07:44:02.660107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:37.274 [2024-12-02 07:44:02.660146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:37.274 [2024-12-02 07:44:02.660161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a9010 with addr=10.0.0.2, port=4420 00:16:37.274 [2024-12-02 07:44:02.660171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:37.274 [2024-12-02 07:44:02.660189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:37.274 [2024-12-02 07:44:02.660204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:37.274 [2024-12-02 07:44:02.660213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:37.274 [2024-12-02 07:44:02.660223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:37.274 [2024-12-02 07:44:02.660242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:37.274 [2024-12-02 07:44:02.660252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:37.274 07:44:02 -- host/timeout.sh@90 -- # sleep 1
00:16:38.206 [2024-12-02 07:44:03.660324] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:16:38.206 [2024-12-02 07:44:03.660782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:38.206 [2024-12-02 07:44:03.661060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:38.206 [2024-12-02 07:44:03.661085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a9010 with addr=10.0.0.2, port=4420
00:16:38.206 [2024-12-02 07:44:03.661097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set
00:16:38.206 [2024-12-02 07:44:03.661120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor
00:16:38.206 [2024-12-02 07:44:03.661137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:38.206 [2024-12-02 07:44:03.661146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:38.206 [2024-12-02 07:44:03.661155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:38.206 [2024-12-02 07:44:03.661175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:38.206 [2024-12-02 07:44:03.661187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:38.206 07:44:03 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:38.463 [2024-12-02 07:44:03.916231] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:38.463 07:44:03 -- host/timeout.sh@92 -- # wait 73622
00:16:39.394 [2024-12-02 07:44:04.677910] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:45.957
00:16:45.957                                                                             Latency(us)
00:16:45.957 [2024-12-02T07:44:11.581Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:45.957 [2024-12-02T07:44:11.581Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:45.957                              Verification LBA range: start 0x0 length 0x4000
00:16:45.957                              NVMe0n1                     :      10.01   10912.69      42.63       0.00       0.00   11709.32     711.21 3019898.88
00:16:45.957 [2024-12-02T07:44:11.581Z] ===================================================================================================================
00:16:45.957 [2024-12-02T07:44:11.581Z] Total                       :            10912.69      42.63       0.00       0.00   11709.32     711.21 3019898.88
00:16:45.957 0
00:16:45.957 07:44:11 -- host/timeout.sh@97 -- # rpc_pid=73732
07:44:11 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
07:44:11 -- host/timeout.sh@98 -- # sleep 1
00:16:46.216 Running I/O for 10 seconds...
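(Reference sketch, not part of the captured output.) The host/timeout.sh trace above amounts to: drop the target's TCP listener while bdevperf keeps I/O outstanding, let the host's reconnect attempts fail against a refused connection (errno = 111), re-add the listener so the controller reset succeeds, then collect the run's statistics. Using only the RPC invocations and paths already visible in the trace, the listener toggle driven by hand would look roughly like this (the ordering and the one-second pause are illustrative, not the script's exact logic):

    # Drop the listener: outstanding I/O is aborted (SQ DELETION) and the host begins reconnect attempts.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # Re-add the listener: the next reconnect/reset attempt should succeed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Kick off the next timed I/O pass over the bdevperf RPC socket and gather results.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests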
00:16:47.152 07:44:12 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.414 [2024-12-02 07:44:12.797224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.797518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.797731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.797850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.797936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.798982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.799988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc14a0 is same with the state(5) to be set 00:16:47.414 [2024-12-02 07:44:12.800047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.414 [2024-12-02 07:44:12.800589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.414 [2024-12-02 07:44:12.800598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 
07:44:12.800673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.800740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.800760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.800817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.800836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.800855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.800865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:79 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.801953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.801987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.801997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.802006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.802025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:47.415 [2024-12-02 07:44:12.802043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.802062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.802083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.802102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.415 [2024-12-02 07:44:12.802121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.415 [2024-12-02 07:44:12.802150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.415 [2024-12-02 07:44:12.802158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802245] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802522] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.802781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.802791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.802800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.803140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.803181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.803271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.416 [2024-12-02 07:44:12.803290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 
07:44:12.803345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.416 [2024-12-02 07:44:12.803418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.416 [2024-12-02 07:44:12.803428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:54 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.417 [2024-12-02 07:44:12.803837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:47.417 [2024-12-02 07:44:12.803946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.803956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131fc30 is same with the state(5) to be set 00:16:47.417 [2024-12-02 07:44:12.803967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:47.417 [2024-12-02 07:44:12.803974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:47.417 [2024-12-02 07:44:12.803982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7152 len:8 PRP1 0x0 PRP2 0x0 00:16:47.417 [2024-12-02 07:44:12.803990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.804028] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x131fc30 was disconnected and freed. reset controller. 00:16:47.417 [2024-12-02 07:44:12.804096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.417 [2024-12-02 07:44:12.804110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.804120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.417 [2024-12-02 07:44:12.804129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.804137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.417 [2024-12-02 07:44:12.804145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.804154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.417 [2024-12-02 07:44:12.804163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.417 [2024-12-02 07:44:12.804171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:47.417 [2024-12-02 07:44:12.804790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:47.417 [2024-12-02 07:44:12.805229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:47.417 [2024-12-02 07:44:12.805791] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.417 [2024-12-02 07:44:12.806132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.417 [2024-12-02 07:44:12.806507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.417 [2024-12-02 07:44:12.806823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a9010 with addr=10.0.0.2, port=4420 00:16:47.417 [2024-12-02 07:44:12.807179] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:47.417 [2024-12-02 07:44:12.807591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:47.417 [2024-12-02 07:44:12.807930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:47.417 [2024-12-02 07:44:12.807948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:47.417 [2024-12-02 07:44:12.808050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:47.417 [2024-12-02 07:44:12.808075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:47.417 [2024-12-02 07:44:12.808088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:47.417 07:44:12 -- host/timeout.sh@101 -- # sleep 3 00:16:48.354 [2024-12-02 07:44:13.808167] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:48.354 [2024-12-02 07:44:13.808612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:48.354 [2024-12-02 07:44:13.808901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:48.354 [2024-12-02 07:44:13.809104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a9010 with addr=10.0.0.2, port=4420 00:16:48.354 [2024-12-02 07:44:13.809460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:48.354 [2024-12-02 07:44:13.809832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:48.354 [2024-12-02 07:44:13.810268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:48.354 [2024-12-02 07:44:13.810836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:48.354 [2024-12-02 07:44:13.811207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:48.354 [2024-12-02 07:44:13.811486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:48.354 [2024-12-02 07:44:13.811690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:49.289 [2024-12-02 07:44:14.812099] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:49.289 [2024-12-02 07:44:14.812538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:49.289 [2024-12-02 07:44:14.812843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:49.289 [2024-12-02 07:44:14.813045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a9010 with addr=10.0.0.2, port=4420 00:16:49.289 [2024-12-02 07:44:14.813413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:49.289 [2024-12-02 07:44:14.813848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:49.290 [2024-12-02 07:44:14.814271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:49.290 [2024-12-02 07:44:14.814733] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:49.290 [2024-12-02 07:44:14.815096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:49.290 [2024-12-02 07:44:14.815392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:49.290 [2024-12-02 07:44:14.815597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:50.223 [2024-12-02 07:44:15.816145] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:50.223 [2024-12-02 07:44:15.816575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:50.223 [2024-12-02 07:44:15.816643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:50.223 [2024-12-02 07:44:15.816661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a9010 with addr=10.0.0.2, port=4420 00:16:50.223 [2024-12-02 07:44:15.816672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(5) to be set 00:16:50.223 [2024-12-02 07:44:15.816854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:16:50.223 [2024-12-02 07:44:15.817013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:50.223 [2024-12-02 07:44:15.817024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:50.223 [2024-12-02 07:44:15.817032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:50.223 [2024-12-02 07:44:15.819466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:50.223 [2024-12-02 07:44:15.819498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:50.223 07:44:15 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.488 [2024-12-02 07:44:16.085839] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.488 07:44:16 -- host/timeout.sh@103 -- # wait 73732 00:16:51.424 [2024-12-02 07:44:16.843262] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:56.699 00:16:56.699 Latency(us) 00:16:56.699 [2024-12-02T07:44:22.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.699 [2024-12-02T07:44:22.323Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:56.699 Verification LBA range: start 0x0 length 0x4000 00:16:56.699 NVMe0n1 : 10.01 9258.12 36.16 6727.02 0.00 7991.77 491.52 3019898.88 00:16:56.699 [2024-12-02T07:44:22.323Z] =================================================================================================================== 00:16:56.699 [2024-12-02T07:44:22.323Z] Total : 9258.12 36.16 6727.02 0.00 7991.77 0.00 3019898.88 00:16:56.699 0 00:16:56.699 07:44:21 -- host/timeout.sh@105 -- # killprocess 73598 00:16:56.699 07:44:21 -- common/autotest_common.sh@936 -- # '[' -z 73598 ']' 00:16:56.699 07:44:21 -- common/autotest_common.sh@940 -- # kill -0 73598 00:16:56.699 07:44:21 -- common/autotest_common.sh@941 -- # uname 00:16:56.699 07:44:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.699 07:44:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73598 00:16:56.699 killing process with pid 73598 00:16:56.699 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.699 00:16:56.699 Latency(us) 00:16:56.699 [2024-12-02T07:44:22.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.699 [2024-12-02T07:44:22.323Z] =================================================================================================================== 00:16:56.699 [2024-12-02T07:44:22.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.699 07:44:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:56.699 07:44:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:56.699 07:44:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73598' 00:16:56.699 07:44:21 -- common/autotest_common.sh@955 -- # kill 73598 00:16:56.699 07:44:21 -- common/autotest_common.sh@960 -- # wait 73598 00:16:56.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:56.699 07:44:21 -- host/timeout.sh@110 -- # bdevperf_pid=73846 00:16:56.699 07:44:21 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:16:56.699 07:44:21 -- host/timeout.sh@112 -- # waitforlisten 73846 /var/tmp/bdevperf.sock 00:16:56.699 07:44:21 -- common/autotest_common.sh@829 -- # '[' -z 73846 ']' 00:16:56.699 07:44:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.699 07:44:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.699 07:44:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.699 07:44:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.699 07:44:21 -- common/autotest_common.sh@10 -- # set +x 00:16:56.699 [2024-12-02 07:44:21.937080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:56.699 [2024-12-02 07:44:21.937557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73846 ] 00:16:56.699 [2024-12-02 07:44:22.069393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.699 [2024-12-02 07:44:22.119410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.636 07:44:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.637 07:44:22 -- common/autotest_common.sh@862 -- # return 0 00:16:57.637 07:44:22 -- host/timeout.sh@116 -- # dtrace_pid=73862 00:16:57.637 07:44:22 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 73846 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:16:57.637 07:44:22 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:16:57.637 07:44:23 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:16:57.896 NVMe0n1 00:16:57.896 07:44:23 -- host/timeout.sh@124 -- # rpc_pid=73898 00:16:57.896 07:44:23 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.896 07:44:23 -- host/timeout.sh@125 -- # sleep 1 00:16:58.154 Running I/O for 10 seconds... 
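For reference, the RPC sequence that host/timeout.sh drives through rpc.py in the traced commands above can be replayed by hand roughly as follows. This is a minimal sketch assembled only from the commands, paths, address and NQN visible in this run; the bpftrace attachment and the waitforlisten/killprocess helpers from autotest_common.sh are left out, and the sleep durations are placeholders standing in for the script's own synchronization.
#!/usr/bin/env bash
# Rough manual replay of the controller-loss/reconnect sequence exercised here.
# Assumes an SPDK NVMe/TCP target already serves nqn.2016-06.io.spdk:cnode1 on
# 10.0.0.2:4420 and the repo is checked out at /home/vagrant/spdk_repo/spdk.
rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/bdevperf.sock
# Start bdevperf waiting for RPC (-z) on core 2 (-m 0x4): queue depth 128,
# 4096-byte random reads for 10 seconds.
"$rootdir/build/examples/bdevperf" -m 0x4 -z -r "$rpc_sock" -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!
sleep 1   # stand-in for waitforlisten on $rpc_sock
# Apply the bdev_nvme options used by the test (-r -1 -e 9), then attach the
# controller with a 2 s reconnect delay and a 5 s controller-loss timeout.
"$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_nvme_set_options -r -1 -e 9
"$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the workload, then drop the target listener to force the
# "connect() failed, errno = 111" reconnect loop seen in the log above.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$rpc_sock" perform_tests &
"$rootdir/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3   # stay under the controller-loss timeout, as the script does
# Restore the listener so the pending reset can succeed
# ("Resetting controller successful." in the log).
"$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
wait "$bdevperf_pid"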
00:16:59.095 07:44:24 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:59.095 [2024-12-02 07:44:24.702803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d767f0 is same with the state(5) to be set
00:16:59.095 [... the identical nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x1d767f0 repeats through 07:44:24.704016 while the listener is torn down; duplicate entries omitted ...]
00:16:59.096 [2024-12-02 07:44:24.704097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:59.096 [2024-12-02 07:44:24.704125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:59.096 [2024-12-02 07:44:24.704145] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.096 [2024-12-02 07:44:24.704153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.096 [2024-12-02 07:44:24.704163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.096 [2024-12-02 07:44:24.704171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.096 [2024-12-02 07:44:24.704182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.096 [2024-12-02 07:44:24.704190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.096 [2024-12-02 07:44:24.704199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.096 [2024-12-02 07:44:24.704207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.096 [2024-12-02 07:44:24.704218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.096 [2024-12-02 07:44:24.704227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.096 [2024-12-02 07:44:24.704236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.096 [2024-12-02 07:44:24.704244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.096 [2024-12-02 07:44:24.704253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5704 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 
[2024-12-02 07:44:24.704866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.097 [2024-12-02 07:44:24.704972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.097 [2024-12-02 07:44:24.704982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.704990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.098 [2024-12-02 07:44:24.705695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.098 [2024-12-02 07:44:24.705704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 
[2024-12-02 07:44:24.705802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.705983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.705991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.099 [2024-12-02 07:44:24.706280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.099 [2024-12-02 07:44:24.706290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113880 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.706499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.100 [2024-12-02 07:44:24.706508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.707853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6370c0 is same with the state(5) to be set 00:16:59.100 [2024-12-02 07:44:24.708284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.100 [2024-12-02 07:44:24.708503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.100 [2024-12-02 07:44:24.708654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40392 len:8 PRP1 0x0 PRP2 0x0 00:16:59.100 [2024-12-02 07:44:24.709210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.709707] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6370c0 was disconnected and freed. reset controller. 
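Note on the abort burst above: pulling the TCP listener out from under an active initiator makes the target delete the I/O submission queue, so every outstanding read completes as "ABORTED - SQ DELETION" and the host-side qpair (0x6370c0) is disconnected and freed before the controller reset begins. A minimal shell sketch of that step, assuming a target is already running with cnode1 serving I/O on 10.0.0.2:4420; the remove_listener call is copied from the top of this excerpt, while the add_listener counterpart is shown only as the eventual recovery path and does not appear in this excerpt:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener while reads are in flight; the initiator then sees
  # the "ABORTED - SQ DELETION" completions logged above.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # Restoring the listener later is what would let the periodic reconnect
  # attempts below succeed; in this excerpt it stays removed.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420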
00:16:59.100 [2024-12-02 07:44:24.710261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.100 [2024-12-02 07:44:24.710666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.711196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.100 [2024-12-02 07:44:24.711585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.712137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.100 [2024-12-02 07:44:24.712501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.712992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.100 [2024-12-02 07:44:24.713478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.100 [2024-12-02 07:44:24.713805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:16:59.100 [2024-12-02 07:44:24.714153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:59.100 [2024-12-02 07:44:24.714195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:16:59.100 [2024-12-02 07:44:24.714351] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:59.100 [2024-12-02 07:44:24.714439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:59.100 [2024-12-02 07:44:24.714488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:59.100 [2024-12-02 07:44:24.714506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4010 with addr=10.0.0.2, port=4420 00:16:59.100 [2024-12-02 07:44:24.714517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:16:59.100 [2024-12-02 07:44:24.714537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:16:59.100 [2024-12-02 07:44:24.714554] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:59.100 [2024-12-02 07:44:24.714563] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:59.100 [2024-12-02 07:44:24.714573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.100 [2024-12-02 07:44:24.714594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
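From here the host enters the reconnect loop: each attempt to re-open the connection to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED), the controller is marked failed, and bdev_nvme schedules the next reset roughly two seconds later; the same sequence repeats below at 07:44:26 and 07:44:28. The timeout test asserts on this cadence at the end by counting the "reconnect delay" records in the trace it saved. A sketch of that check, assuming the trace file is the trace.txt path the script cats further down:

    # Sketch of the pass/fail check applied further down: an ~8 s window with a
    # ~2 s retry delay must have produced at least three "reconnect delay" records.
    # trace.txt corresponds to .../test/nvmf/host/trace.txt written by the test.
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( delays <= 2 )); then
        echo "expected at least 3 reconnect delays, got ${delays}" >&2
        exit 1
    fi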
00:16:59.100 [2024-12-02 07:44:24.714605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:59.359 07:44:24 -- host/timeout.sh@128 -- # wait 73898 00:17:01.261 [2024-12-02 07:44:26.714708] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.261 [2024-12-02 07:44:26.714817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.261 [2024-12-02 07:44:26.714861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:01.261 [2024-12-02 07:44:26.714877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4010 with addr=10.0.0.2, port=4420 00:17:01.261 [2024-12-02 07:44:26.714891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:17:01.261 [2024-12-02 07:44:26.714910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:17:01.261 [2024-12-02 07:44:26.714926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:01.261 [2024-12-02 07:44:26.714935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:01.261 [2024-12-02 07:44:26.714943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:01.261 [2024-12-02 07:44:26.714963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:01.261 [2024-12-02 07:44:26.714972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:03.198 [2024-12-02 07:44:28.715098] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.198 [2024-12-02 07:44:28.715190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.198 [2024-12-02 07:44:28.715232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.198 [2024-12-02 07:44:28.715248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d4010 with addr=10.0.0.2, port=4420 00:17:03.198 [2024-12-02 07:44:28.715259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4010 is same with the state(5) to be set 00:17:03.198 [2024-12-02 07:44:28.715279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4010 (9): Bad file descriptor 00:17:03.198 [2024-12-02 07:44:28.715295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:03.198 [2024-12-02 07:44:28.715303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:03.198 [2024-12-02 07:44:28.715324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:03.198 [2024-12-02 07:44:28.715347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.198 [2024-12-02 07:44:28.715357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:05.102 [2024-12-02 07:44:30.715419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:05.102 [2024-12-02 07:44:30.715467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:05.102 [2024-12-02 07:44:30.715494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:05.102 [2024-12-02 07:44:30.715503] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:05.102 [2024-12-02 07:44:30.715526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:06.479 00:17:06.479 Latency(us) 00:17:06.479 [2024-12-02T07:44:32.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.479 [2024-12-02T07:44:32.103Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:17:06.479 NVMe0n1 : 8.13 2420.75 9.46 15.74 0.00 52611.17 6791.91 7046430.72 00:17:06.479 [2024-12-02T07:44:32.103Z] =================================================================================================================== 00:17:06.479 [2024-12-02T07:44:32.103Z] Total : 2420.75 9.46 15.74 0.00 52611.17 6791.91 7046430.72 00:17:06.479 0 00:17:06.479 07:44:31 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:06.479 Attaching 5 probes... 00:17:06.479 1318.270209: reset bdev controller NVMe0 00:17:06.479 1318.389557: reconnect bdev controller NVMe0 00:17:06.479 3318.753150: reconnect delay bdev controller NVMe0 00:17:06.479 3318.769958: reconnect bdev controller NVMe0 00:17:06.479 5319.120678: reconnect delay bdev controller NVMe0 00:17:06.479 5319.152182: reconnect bdev controller NVMe0 00:17:06.479 7319.508616: reconnect delay bdev controller NVMe0 00:17:06.479 7319.525487: reconnect bdev controller NVMe0 00:17:06.479 07:44:31 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:17:06.479 07:44:31 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:17:06.479 07:44:31 -- host/timeout.sh@136 -- # kill 73862 00:17:06.479 07:44:31 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:06.479 07:44:31 -- host/timeout.sh@139 -- # killprocess 73846 00:17:06.479 07:44:31 -- common/autotest_common.sh@936 -- # '[' -z 73846 ']' 00:17:06.479 07:44:31 -- common/autotest_common.sh@940 -- # kill -0 73846 00:17:06.479 07:44:31 -- common/autotest_common.sh@941 -- # uname 00:17:06.479 07:44:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.479 07:44:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73846 00:17:06.479 killing process with pid 73846 00:17:06.479 Received shutdown signal, test time was about 8.201335 seconds 00:17:06.479 00:17:06.479 Latency(us) 00:17:06.479 [2024-12-02T07:44:32.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.479 [2024-12-02T07:44:32.103Z] =================================================================================================================== 00:17:06.479 [2024-12-02T07:44:32.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.479 07:44:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:06.479 07:44:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:06.479 07:44:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73846' 00:17:06.479 07:44:31 -- common/autotest_common.sh@955 -- # kill 73846 00:17:06.479 07:44:31 -- common/autotest_common.sh@960 -- # wait 73846 00:17:06.479 07:44:31 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.738 07:44:32 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:17:06.738 07:44:32 -- host/timeout.sh@145 -- # nvmftestfini 00:17:06.738 07:44:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:06.738 07:44:32 -- nvmf/common.sh@116 -- # sync 00:17:06.738 07:44:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:06.738 07:44:32 -- nvmf/common.sh@119 -- # set +e 00:17:06.738 07:44:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.738 07:44:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:06.738 rmmod nvme_tcp 00:17:06.739 rmmod nvme_fabrics 00:17:06.739 rmmod nvme_keyring 00:17:06.739 07:44:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.739 07:44:32 -- nvmf/common.sh@123 -- # set -e 00:17:06.739 07:44:32 -- nvmf/common.sh@124 -- # return 0 00:17:06.739 07:44:32 -- nvmf/common.sh@477 -- # '[' -n 73403 ']' 00:17:06.739 07:44:32 -- nvmf/common.sh@478 -- # killprocess 73403 00:17:06.739 07:44:32 -- common/autotest_common.sh@936 -- # '[' -z 73403 ']' 00:17:06.739 07:44:32 -- common/autotest_common.sh@940 -- # kill -0 73403 00:17:06.739 07:44:32 -- common/autotest_common.sh@941 -- # uname 00:17:06.739 07:44:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.739 07:44:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73403 00:17:06.739 killing process with pid 73403 00:17:06.739 07:44:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:06.739 07:44:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:06.739 07:44:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73403' 00:17:06.739 07:44:32 -- common/autotest_common.sh@955 -- # kill 73403 00:17:06.739 07:44:32 -- common/autotest_common.sh@960 -- # wait 73403 00:17:06.998 07:44:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.998 07:44:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:06.998 07:44:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:06.998 07:44:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.998 07:44:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:06.998 07:44:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.998 07:44:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.998 07:44:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.998 07:44:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:06.998 ************************************ 00:17:06.998 END TEST nvmf_timeout 00:17:06.998 ************************************ 00:17:06.998 00:17:06.998 real 0m46.690s 00:17:06.998 user 2m17.255s 00:17:06.998 sys 0m5.131s 00:17:06.998 07:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.998 07:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:06.998 07:44:32 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:17:06.998 07:44:32 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:17:06.998 07:44:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:06.998 07:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:07.257 07:44:32 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:17:07.257 ************************************ 00:17:07.257 END TEST nvmf_tcp 00:17:07.257 ************************************ 00:17:07.257 00:17:07.257 real 10m23.254s 00:17:07.257 user 29m2.410s 00:17:07.257 sys 3m20.268s 00:17:07.257 
07:44:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:07.257 07:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:07.257 07:44:32 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:17:07.257 07:44:32 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:17:07.257 07:44:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:07.257 07:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.257 07:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:07.257 ************************************ 00:17:07.257 START TEST nvmf_dif 00:17:07.257 ************************************ 00:17:07.257 07:44:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:17:07.257 * Looking for test storage... 00:17:07.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:07.257 07:44:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:07.257 07:44:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:07.257 07:44:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:07.257 07:44:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:07.257 07:44:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:07.257 07:44:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:07.257 07:44:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:07.257 07:44:32 -- scripts/common.sh@335 -- # IFS=.-: 00:17:07.257 07:44:32 -- scripts/common.sh@335 -- # read -ra ver1 00:17:07.257 07:44:32 -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.257 07:44:32 -- scripts/common.sh@336 -- # read -ra ver2 00:17:07.257 07:44:32 -- scripts/common.sh@337 -- # local 'op=<' 00:17:07.257 07:44:32 -- scripts/common.sh@339 -- # ver1_l=2 00:17:07.257 07:44:32 -- scripts/common.sh@340 -- # ver2_l=1 00:17:07.257 07:44:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:07.257 07:44:32 -- scripts/common.sh@343 -- # case "$op" in 00:17:07.257 07:44:32 -- scripts/common.sh@344 -- # : 1 00:17:07.257 07:44:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:07.257 07:44:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.257 07:44:32 -- scripts/common.sh@364 -- # decimal 1 00:17:07.257 07:44:32 -- scripts/common.sh@352 -- # local d=1 00:17:07.257 07:44:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.257 07:44:32 -- scripts/common.sh@354 -- # echo 1 00:17:07.257 07:44:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:07.257 07:44:32 -- scripts/common.sh@365 -- # decimal 2 00:17:07.257 07:44:32 -- scripts/common.sh@352 -- # local d=2 00:17:07.257 07:44:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.257 07:44:32 -- scripts/common.sh@354 -- # echo 2 00:17:07.257 07:44:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:07.257 07:44:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:07.257 07:44:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:07.257 07:44:32 -- scripts/common.sh@367 -- # return 0 00:17:07.257 07:44:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.257 07:44:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:07.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.257 --rc genhtml_branch_coverage=1 00:17:07.257 --rc genhtml_function_coverage=1 00:17:07.257 --rc genhtml_legend=1 00:17:07.257 --rc geninfo_all_blocks=1 00:17:07.257 --rc geninfo_unexecuted_blocks=1 00:17:07.257 00:17:07.257 ' 00:17:07.257 07:44:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:07.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.257 --rc genhtml_branch_coverage=1 00:17:07.257 --rc genhtml_function_coverage=1 00:17:07.257 --rc genhtml_legend=1 00:17:07.257 --rc geninfo_all_blocks=1 00:17:07.257 --rc geninfo_unexecuted_blocks=1 00:17:07.257 00:17:07.257 ' 00:17:07.517 07:44:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:07.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.517 --rc genhtml_branch_coverage=1 00:17:07.517 --rc genhtml_function_coverage=1 00:17:07.517 --rc genhtml_legend=1 00:17:07.517 --rc geninfo_all_blocks=1 00:17:07.517 --rc geninfo_unexecuted_blocks=1 00:17:07.517 00:17:07.517 ' 00:17:07.517 07:44:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:07.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.517 --rc genhtml_branch_coverage=1 00:17:07.517 --rc genhtml_function_coverage=1 00:17:07.517 --rc genhtml_legend=1 00:17:07.517 --rc geninfo_all_blocks=1 00:17:07.517 --rc geninfo_unexecuted_blocks=1 00:17:07.517 00:17:07.517 ' 00:17:07.517 07:44:32 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.517 07:44:32 -- nvmf/common.sh@7 -- # uname -s 00:17:07.517 07:44:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.517 07:44:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.517 07:44:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.517 07:44:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.517 07:44:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.517 07:44:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.517 07:44:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.517 07:44:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.517 07:44:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.517 07:44:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.517 07:44:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:17:07.517 
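The host NQN used for every connection in these tests is generated per run with `nvme gen-hostnqn`, giving the UUID-based NQN visible in the trace above. An equivalent sketch (assuming uuidgen is available; nvme-cli may derive the UUID from system identifiers rather than at random):

    # Rough equivalent of `nvme gen-hostnqn`: a UUID under the standard
    # 2014-08 NVMe Express NQN prefix (sketch, not nvme-cli's exact logic).
    printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"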
07:44:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:17:07.517 07:44:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.517 07:44:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.517 07:44:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.517 07:44:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.517 07:44:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.517 07:44:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.517 07:44:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.517 07:44:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.517 07:44:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.517 07:44:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.517 07:44:32 -- paths/export.sh@5 -- # export PATH 00:17:07.517 07:44:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.517 07:44:32 -- nvmf/common.sh@46 -- # : 0 00:17:07.517 07:44:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.517 07:44:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.517 07:44:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.517 07:44:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.517 07:44:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.517 07:44:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.517 07:44:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.517 07:44:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.517 07:44:32 -- target/dif.sh@15 -- # NULL_META=16 00:17:07.517 07:44:32 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:17:07.517 07:44:32 -- target/dif.sh@15 -- # NULL_SIZE=64 00:17:07.517 07:44:32 -- target/dif.sh@15 -- # NULL_DIF=1 00:17:07.517 07:44:32 -- target/dif.sh@135 -- # nvmftestinit 00:17:07.517 07:44:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:07.517 07:44:32 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.517 07:44:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.517 07:44:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.517 07:44:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.517 07:44:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.517 07:44:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:17:07.517 07:44:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.517 07:44:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:07.517 07:44:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:07.517 07:44:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:07.517 07:44:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:07.517 07:44:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:07.517 07:44:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:07.517 07:44:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.517 07:44:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.517 07:44:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:07.517 07:44:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:07.517 07:44:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.517 07:44:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.517 07:44:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.517 07:44:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.517 07:44:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.517 07:44:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.517 07:44:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.517 07:44:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.517 07:44:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:07.517 07:44:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:07.517 Cannot find device "nvmf_tgt_br" 00:17:07.517 07:44:32 -- nvmf/common.sh@154 -- # true 00:17:07.517 07:44:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.517 Cannot find device "nvmf_tgt_br2" 00:17:07.517 07:44:32 -- nvmf/common.sh@155 -- # true 00:17:07.518 07:44:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:07.518 07:44:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:07.518 Cannot find device "nvmf_tgt_br" 00:17:07.518 07:44:32 -- nvmf/common.sh@157 -- # true 00:17:07.518 07:44:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:07.518 Cannot find device "nvmf_tgt_br2" 00:17:07.518 07:44:32 -- nvmf/common.sh@158 -- # true 00:17:07.518 07:44:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:07.518 07:44:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:07.518 07:44:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.518 07:44:33 -- nvmf/common.sh@161 -- # true 00:17:07.518 07:44:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.518 07:44:33 -- nvmf/common.sh@162 -- # true 00:17:07.518 07:44:33 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:17:07.518 07:44:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.518 07:44:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.518 07:44:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.518 07:44:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.518 07:44:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.518 07:44:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.518 07:44:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.518 07:44:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.518 07:44:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:07.518 07:44:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:07.518 07:44:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:07.518 07:44:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:07.518 07:44:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.518 07:44:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.777 07:44:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.777 07:44:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:07.777 07:44:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:07.777 07:44:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.777 07:44:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.777 07:44:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.777 07:44:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.777 07:44:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.777 07:44:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:07.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:07.777 00:17:07.777 --- 10.0.0.2 ping statistics --- 00:17:07.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.777 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:07.777 07:44:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:07.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:07.777 00:17:07.777 --- 10.0.0.3 ping statistics --- 00:17:07.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.777 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:07.777 07:44:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:07.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:07.777 00:17:07.777 --- 10.0.0.1 ping statistics --- 00:17:07.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.777 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:07.777 07:44:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.777 07:44:33 -- nvmf/common.sh@421 -- # return 0 00:17:07.777 07:44:33 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:17:07.777 07:44:33 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:08.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:08.036 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:08.036 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:08.036 07:44:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.036 07:44:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.036 07:44:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.036 07:44:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.036 07:44:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.036 07:44:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.036 07:44:33 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:17:08.036 07:44:33 -- target/dif.sh@137 -- # nvmfappstart 00:17:08.036 07:44:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.036 07:44:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.036 07:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:08.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.036 07:44:33 -- nvmf/common.sh@469 -- # nvmfpid=74345 00:17:08.036 07:44:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:08.036 07:44:33 -- nvmf/common.sh@470 -- # waitforlisten 74345 00:17:08.036 07:44:33 -- common/autotest_common.sh@829 -- # '[' -z 74345 ']' 00:17:08.036 07:44:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.036 07:44:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.036 07:44:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.036 07:44:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.036 07:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:08.295 [2024-12-02 07:44:33.678004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.295 [2024-12-02 07:44:33.678263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.295 [2024-12-02 07:44:33.815621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.295 [2024-12-02 07:44:33.863464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.295 [2024-12-02 07:44:33.863604] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.295 [2024-12-02 07:44:33.863617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
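With the target app now starting inside the namespace, it is worth recapping the virtual topology that the bring-up traced above just built: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) left in the root namespace, and everything joined over the nvmf_br bridge. Condensed into a standalone sketch (names and addresses taken from the trace; the second target interface and error handling are omitted):

    # Condensed re-creation of the veth/namespace topology set up above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target reachability check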
00:17:08.295 [2024-12-02 07:44:33.863624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.295 [2024-12-02 07:44:33.863653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.232 07:44:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.232 07:44:34 -- common/autotest_common.sh@862 -- # return 0 00:17:09.232 07:44:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.232 07:44:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 07:44:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.232 07:44:34 -- target/dif.sh@139 -- # create_transport 00:17:09.232 07:44:34 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:17:09.232 07:44:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 [2024-12-02 07:44:34.718766] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.232 07:44:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.232 07:44:34 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:17:09.232 07:44:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:09.232 07:44:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 ************************************ 00:17:09.232 START TEST fio_dif_1_default 00:17:09.232 ************************************ 00:17:09.232 07:44:34 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:17:09.232 07:44:34 -- target/dif.sh@86 -- # create_subsystems 0 00:17:09.232 07:44:34 -- target/dif.sh@28 -- # local sub 00:17:09.232 07:44:34 -- target/dif.sh@30 -- # for sub in "$@" 00:17:09.232 07:44:34 -- target/dif.sh@31 -- # create_subsystem 0 00:17:09.232 07:44:34 -- target/dif.sh@18 -- # local sub_id=0 00:17:09.232 07:44:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:09.232 07:44:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 bdev_null0 00:17:09.232 07:44:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.232 07:44:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:09.232 07:44:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 07:44:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.232 07:44:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:09.232 07:44:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 07:44:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.232 07:44:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.232 07:44:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.232 07:44:34 -- common/autotest_common.sh@10 -- # set +x 00:17:09.232 [2024-12-02 07:44:34.766847] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.232 07:44:34 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.232 07:44:34 -- target/dif.sh@87 -- # fio /dev/fd/62 00:17:09.232 07:44:34 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:17:09.232 07:44:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:09.232 07:44:34 -- nvmf/common.sh@520 -- # config=() 00:17:09.232 07:44:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:09.232 07:44:34 -- nvmf/common.sh@520 -- # local subsystem config 00:17:09.232 07:44:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:09.232 07:44:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:09.232 07:44:34 -- target/dif.sh@82 -- # gen_fio_conf 00:17:09.232 07:44:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:09.232 { 00:17:09.232 "params": { 00:17:09.232 "name": "Nvme$subsystem", 00:17:09.232 "trtype": "$TEST_TRANSPORT", 00:17:09.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.232 "adrfam": "ipv4", 00:17:09.232 "trsvcid": "$NVMF_PORT", 00:17:09.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.232 "hdgst": ${hdgst:-false}, 00:17:09.232 "ddgst": ${ddgst:-false} 00:17:09.232 }, 00:17:09.232 "method": "bdev_nvme_attach_controller" 00:17:09.232 } 00:17:09.232 EOF 00:17:09.232 )") 00:17:09.232 07:44:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:09.232 07:44:34 -- target/dif.sh@54 -- # local file 00:17:09.232 07:44:34 -- target/dif.sh@56 -- # cat 00:17:09.232 07:44:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:09.232 07:44:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:09.232 07:44:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.232 07:44:34 -- common/autotest_common.sh@1330 -- # shift 00:17:09.232 07:44:34 -- nvmf/common.sh@542 -- # cat 00:17:09.232 07:44:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:09.232 07:44:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:09.232 07:44:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:09.232 07:44:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.232 07:44:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:09.232 07:44:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:09.232 07:44:34 -- target/dif.sh@72 -- # (( file <= files )) 00:17:09.232 07:44:34 -- nvmf/common.sh@544 -- # jq . 
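For fio_dif_1_default the helpers traced above amount to four RPCs against the running target: create a 64 MB, 512-byte-block null bdev with 16-byte metadata and DIF type 1, expose it through subsystem cnode0, and listen on the namespaced TCP address. Collected as direct rpc.py calls (a sketch; the test drives these through its rpc_cmd wrapper):

    # The four RPCs behind create_subsystems 0, issued directly against the
    # target's RPC socket (sketch; rpc.py lives in spdk/scripts/).
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
           --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
           -t tcp -a 10.0.0.2 -s 4420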
00:17:09.232 07:44:34 -- nvmf/common.sh@545 -- # IFS=, 00:17:09.232 07:44:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:09.232 "params": { 00:17:09.232 "name": "Nvme0", 00:17:09.232 "trtype": "tcp", 00:17:09.232 "traddr": "10.0.0.2", 00:17:09.232 "adrfam": "ipv4", 00:17:09.232 "trsvcid": "4420", 00:17:09.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:09.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:09.233 "hdgst": false, 00:17:09.233 "ddgst": false 00:17:09.233 }, 00:17:09.233 "method": "bdev_nvme_attach_controller" 00:17:09.233 }' 00:17:09.233 07:44:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:09.233 07:44:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:09.233 07:44:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:09.233 07:44:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:09.233 07:44:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:09.233 07:44:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:09.233 07:44:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:09.233 07:44:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:09.233 07:44:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:09.233 07:44:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:09.492 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:09.492 fio-3.35 00:17:09.492 Starting 1 thread 00:17:09.750 [2024-12-02 07:44:35.310144] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
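The fio job itself runs through SPDK's fio bdev plugin: gen_nvmf_target_json resolves to the single bdev_nvme_attach_controller block printed above, the plugin is LD_PRELOADed into stock fio, and the config and job files are handed over on /dev/fd. Run by hand it would look roughly like this (a sketch; bdev.json and job.fio are illustrative file names, and bdev.json would need the params above wrapped in SPDK's usual JSON-config structure):

    # Standalone equivalent of the fio_bdev invocation above (sketch).
    # bdev.json: SPDK JSON config carrying the bdev_nvme_attach_controller
    #            call printed in the trace (Nvme0 -> 10.0.0.2:4420, cnode0).
    # job.fio:   a randread job, 4 KiB blocks, iodepth 4, targeting the
    #            attached bdev, matching the fio banner below.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio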
00:17:09.750 [2024-12-02 07:44:35.310222] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:21.959 00:17:21.959 filename0: (groupid=0, jobs=1): err= 0: pid=74412: Mon Dec 2 07:44:45 2024 00:17:21.959 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(406MiB/10001msec) 00:17:21.959 slat (nsec): min=5739, max=91222, avg=7421.49, stdev=3198.08 00:17:21.959 clat (usec): min=303, max=4640, avg=362.35, stdev=49.13 00:17:21.959 lat (usec): min=309, max=4665, avg=369.77, stdev=49.83 00:17:21.959 clat percentiles (usec): 00:17:21.959 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:17:21.959 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 367], 00:17:21.959 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 437], 00:17:21.959 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 562], 99.95th=[ 578], 00:17:21.959 | 99.99th=[ 635] 00:17:21.959 bw ( KiB/s): min=38866, max=42624, per=100.00%, avg=41621.16, stdev=864.04, samples=19 00:17:21.959 iops : min= 9716, max=10656, avg=10405.26, stdev=216.10, samples=19 00:17:21.959 lat (usec) : 500=99.24%, 750=0.75% 00:17:21.959 lat (msec) : 4=0.01%, 10=0.01% 00:17:21.959 cpu : usr=85.77%, sys=12.34%, ctx=79, majf=0, minf=9 00:17:21.960 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.960 issued rwts: total=104040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.960 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:21.960 00:17:21.960 Run status group 0 (all jobs): 00:17:21.960 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=406MiB (426MB), run=10001-10001msec 00:17:21.960 07:44:45 -- target/dif.sh@88 -- # destroy_subsystems 0 00:17:21.960 07:44:45 -- target/dif.sh@43 -- # local sub 00:17:21.960 07:44:45 -- target/dif.sh@45 -- # for sub in "$@" 00:17:21.960 07:44:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:21.960 07:44:45 -- target/dif.sh@36 -- # local sub_id=0 00:17:21.960 07:44:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 ************************************ 00:17:21.960 END TEST fio_dif_1_default 00:17:21.960 ************************************ 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 00:17:21.960 real 0m10.872s 00:17:21.960 user 0m9.153s 00:17:21.960 sys 0m1.445s 00:17:21.960 07:44:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:17:21.960 07:44:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:21.960 07:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 ************************************ 00:17:21.960 START TEST 
fio_dif_1_multi_subsystems 00:17:21.960 ************************************ 00:17:21.960 07:44:45 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:17:21.960 07:44:45 -- target/dif.sh@92 -- # local files=1 00:17:21.960 07:44:45 -- target/dif.sh@94 -- # create_subsystems 0 1 00:17:21.960 07:44:45 -- target/dif.sh@28 -- # local sub 00:17:21.960 07:44:45 -- target/dif.sh@30 -- # for sub in "$@" 00:17:21.960 07:44:45 -- target/dif.sh@31 -- # create_subsystem 0 00:17:21.960 07:44:45 -- target/dif.sh@18 -- # local sub_id=0 00:17:21.960 07:44:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 bdev_null0 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 [2024-12-02 07:44:45.689573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@30 -- # for sub in "$@" 00:17:21.960 07:44:45 -- target/dif.sh@31 -- # create_subsystem 1 00:17:21.960 07:44:45 -- target/dif.sh@18 -- # local sub_id=1 00:17:21.960 07:44:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 bdev_null1 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.960 07:44:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 07:44:45 -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.960 07:44:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 07:44:45 -- target/dif.sh@95 -- # fio /dev/fd/62 00:17:21.960 07:44:45 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:17:21.960 07:44:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:17:21.960 07:44:45 -- nvmf/common.sh@520 -- # config=() 00:17:21.960 07:44:45 -- nvmf/common.sh@520 -- # local subsystem config 00:17:21.960 07:44:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:21.960 07:44:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:21.960 07:44:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:21.960 { 00:17:21.960 "params": { 00:17:21.960 "name": "Nvme$subsystem", 00:17:21.960 "trtype": "$TEST_TRANSPORT", 00:17:21.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.960 "adrfam": "ipv4", 00:17:21.960 "trsvcid": "$NVMF_PORT", 00:17:21.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.960 "hdgst": ${hdgst:-false}, 00:17:21.960 "ddgst": ${ddgst:-false} 00:17:21.960 }, 00:17:21.960 "method": "bdev_nvme_attach_controller" 00:17:21.960 } 00:17:21.960 EOF 00:17:21.960 )") 00:17:21.960 07:44:45 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:21.960 07:44:45 -- target/dif.sh@82 -- # gen_fio_conf 00:17:21.960 07:44:45 -- target/dif.sh@54 -- # local file 00:17:21.960 07:44:45 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:21.960 07:44:45 -- target/dif.sh@56 -- # cat 00:17:21.960 07:44:45 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.960 07:44:45 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:21.960 07:44:45 -- nvmf/common.sh@542 -- # cat 00:17:21.960 07:44:45 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.960 07:44:45 -- common/autotest_common.sh@1330 -- # shift 00:17:21.960 07:44:45 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:21.960 07:44:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.960 07:44:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:21.960 07:44:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.960 07:44:45 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:21.960 07:44:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:21.960 07:44:45 -- target/dif.sh@72 -- # (( file <= files )) 00:17:21.960 07:44:45 -- target/dif.sh@73 -- # cat 00:17:21.960 07:44:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:21.960 07:44:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:21.960 { 00:17:21.960 "params": { 00:17:21.960 "name": "Nvme$subsystem", 00:17:21.960 "trtype": "$TEST_TRANSPORT", 00:17:21.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.960 "adrfam": "ipv4", 00:17:21.960 "trsvcid": "$NVMF_PORT", 00:17:21.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.960 "hdgst": ${hdgst:-false}, 00:17:21.960 "ddgst": ${ddgst:-false} 00:17:21.960 }, 00:17:21.960 "method": "bdev_nvme_attach_controller" 00:17:21.960 } 00:17:21.960 EOF 00:17:21.960 )") 00:17:21.960 07:44:45 -- target/dif.sh@72 -- # (( file++ )) 00:17:21.960 07:44:45 -- 
target/dif.sh@72 -- # (( file <= files )) 00:17:21.960 07:44:45 -- nvmf/common.sh@542 -- # cat 00:17:21.960 07:44:45 -- nvmf/common.sh@544 -- # jq . 00:17:21.960 07:44:45 -- nvmf/common.sh@545 -- # IFS=, 00:17:21.960 07:44:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:21.960 "params": { 00:17:21.960 "name": "Nvme0", 00:17:21.960 "trtype": "tcp", 00:17:21.960 "traddr": "10.0.0.2", 00:17:21.960 "adrfam": "ipv4", 00:17:21.960 "trsvcid": "4420", 00:17:21.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:21.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:21.960 "hdgst": false, 00:17:21.961 "ddgst": false 00:17:21.961 }, 00:17:21.961 "method": "bdev_nvme_attach_controller" 00:17:21.961 },{ 00:17:21.961 "params": { 00:17:21.961 "name": "Nvme1", 00:17:21.961 "trtype": "tcp", 00:17:21.961 "traddr": "10.0.0.2", 00:17:21.961 "adrfam": "ipv4", 00:17:21.961 "trsvcid": "4420", 00:17:21.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.961 "hdgst": false, 00:17:21.961 "ddgst": false 00:17:21.961 }, 00:17:21.961 "method": "bdev_nvme_attach_controller" 00:17:21.961 }' 00:17:21.961 07:44:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:21.961 07:44:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:21.961 07:44:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.961 07:44:45 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.961 07:44:45 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:21.961 07:44:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:21.961 07:44:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:21.961 07:44:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:21.961 07:44:45 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:21.961 07:44:45 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:21.961 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:21.961 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:21.961 fio-3.35 00:17:21.961 Starting 2 threads 00:17:21.961 [2024-12-02 07:44:46.350111] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
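The multi-subsystem variant above differs from the single-target run only in scale: a second null bdev behind a second subsystem, and a resolved JSON with two bdev_nvme_attach_controller blocks (Nvme0 and Nvme1, both on 10.0.0.2:4420) so fio can drive one file per subsystem from two threads. The extra target-side setup, again as direct RPCs (sketch mirroring the rpc_cmd calls traced above):

    # Additional RPCs for the second subsystem used by fio_dif_1_multi_subsystems.
    rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           --serial-number 53313233-1 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4420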
00:17:21.961 [2024-12-02 07:44:46.350173] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:31.938 00:17:31.938 filename0: (groupid=0, jobs=1): err= 0: pid=74577: Mon Dec 2 07:44:56 2024 00:17:31.938 read: IOPS=5497, BW=21.5MiB/s (22.5MB/s)(215MiB/10001msec) 00:17:31.938 slat (nsec): min=6511, max=74099, avg=13072.80, stdev=4344.82 00:17:31.938 clat (usec): min=413, max=1229, avg=692.54, stdev=51.43 00:17:31.938 lat (usec): min=421, max=1253, avg=705.61, stdev=52.08 00:17:31.938 clat percentiles (usec): 00:17:31.938 | 1.00th=[ 603], 5.00th=[ 627], 10.00th=[ 635], 20.00th=[ 652], 00:17:31.938 | 30.00th=[ 668], 40.00th=[ 676], 50.00th=[ 685], 60.00th=[ 693], 00:17:31.938 | 70.00th=[ 709], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:17:31.938 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 979], 00:17:31.938 | 99.99th=[ 1057] 00:17:31.938 bw ( KiB/s): min=21344, max=22432, per=50.03%, avg=22000.84, stdev=280.67, samples=19 00:17:31.938 iops : min= 5336, max= 5608, avg=5500.21, stdev=70.17, samples=19 00:17:31.938 lat (usec) : 500=0.01%, 750=88.12%, 1000=11.83% 00:17:31.938 lat (msec) : 2=0.04% 00:17:31.938 cpu : usr=89.94%, sys=8.56%, ctx=12, majf=0, minf=0 00:17:31.938 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.938 issued rwts: total=54976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.938 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:31.938 filename1: (groupid=0, jobs=1): err= 0: pid=74578: Mon Dec 2 07:44:56 2024 00:17:31.938 read: IOPS=5496, BW=21.5MiB/s (22.5MB/s)(215MiB/10001msec) 00:17:31.938 slat (nsec): min=6173, max=57161, avg=13290.58, stdev=4555.81 00:17:31.938 clat (usec): min=571, max=1217, avg=690.58, stdev=49.30 00:17:31.938 lat (usec): min=581, max=1243, avg=703.87, stdev=50.06 00:17:31.938 clat percentiles (usec): 00:17:31.938 | 1.00th=[ 619], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 652], 00:17:31.938 | 30.00th=[ 660], 40.00th=[ 668], 50.00th=[ 685], 60.00th=[ 693], 00:17:31.938 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:17:31.938 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 955], 00:17:31.938 | 99.99th=[ 1074] 00:17:31.938 bw ( KiB/s): min=21344, max=22432, per=50.03%, avg=21999.16, stdev=283.09, samples=19 00:17:31.938 iops : min= 5336, max= 5608, avg=5499.79, stdev=70.77, samples=19 00:17:31.938 lat (usec) : 750=89.06%, 1000=10.91% 00:17:31.938 lat (msec) : 2=0.02% 00:17:31.938 cpu : usr=89.64%, sys=8.87%, ctx=6, majf=0, minf=0 00:17:31.938 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.939 issued rwts: total=54972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.939 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:31.939 00:17:31.939 Run status group 0 (all jobs): 00:17:31.939 READ: bw=42.9MiB/s (45.0MB/s), 21.5MiB/s-21.5MiB/s (22.5MB/s-22.5MB/s), io=429MiB (450MB), run=10001-10001msec 00:17:31.939 07:44:56 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:17:31.939 07:44:56 -- target/dif.sh@43 -- # local sub 00:17:31.939 07:44:56 -- target/dif.sh@45 -- # for sub in "$@" 00:17:31.939 07:44:56 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:17:31.939 07:44:56 -- target/dif.sh@36 -- # local sub_id=0 00:17:31.939 07:44:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 07:44:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 07:44:56 -- target/dif.sh@45 -- # for sub in "$@" 00:17:31.939 07:44:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:17:31.939 07:44:56 -- target/dif.sh@36 -- # local sub_id=1 00:17:31.939 07:44:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 07:44:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 ************************************ 00:17:31.939 END TEST fio_dif_1_multi_subsystems 00:17:31.939 ************************************ 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 00:17:31.939 real 0m11.013s 00:17:31.939 user 0m18.641s 00:17:31.939 sys 0m1.966s 00:17:31.939 07:44:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 07:44:56 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:17:31.939 07:44:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:31.939 07:44:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 ************************************ 00:17:31.939 START TEST fio_dif_rand_params 00:17:31.939 ************************************ 00:17:31.939 07:44:56 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:17:31.939 07:44:56 -- target/dif.sh@100 -- # local NULL_DIF 00:17:31.939 07:44:56 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:17:31.939 07:44:56 -- target/dif.sh@103 -- # NULL_DIF=3 00:17:31.939 07:44:56 -- target/dif.sh@103 -- # bs=128k 00:17:31.939 07:44:56 -- target/dif.sh@103 -- # numjobs=3 00:17:31.939 07:44:56 -- target/dif.sh@103 -- # iodepth=3 00:17:31.939 07:44:56 -- target/dif.sh@103 -- # runtime=5 00:17:31.939 07:44:56 -- target/dif.sh@105 -- # create_subsystems 0 00:17:31.939 07:44:56 -- target/dif.sh@28 -- # local sub 00:17:31.939 07:44:56 -- target/dif.sh@30 -- # for sub in "$@" 00:17:31.939 07:44:56 -- target/dif.sh@31 -- # create_subsystem 0 00:17:31.939 07:44:56 -- target/dif.sh@18 -- # local sub_id=0 00:17:31.939 07:44:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 bdev_null0 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 
07:44:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 07:44:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 07:44:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:31.939 07:44:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.939 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 [2024-12-02 07:44:56.764427] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.939 07:44:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.939 07:44:56 -- target/dif.sh@106 -- # fio /dev/fd/62 00:17:31.939 07:44:56 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:17:31.939 07:44:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:31.939 07:44:56 -- nvmf/common.sh@520 -- # config=() 00:17:31.939 07:44:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:31.939 07:44:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:31.939 07:44:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:31.939 07:44:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:31.939 07:44:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:31.939 { 00:17:31.939 "params": { 00:17:31.939 "name": "Nvme$subsystem", 00:17:31.939 "trtype": "$TEST_TRANSPORT", 00:17:31.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.939 "adrfam": "ipv4", 00:17:31.939 "trsvcid": "$NVMF_PORT", 00:17:31.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.939 "hdgst": ${hdgst:-false}, 00:17:31.939 "ddgst": ${ddgst:-false} 00:17:31.939 }, 00:17:31.939 "method": "bdev_nvme_attach_controller" 00:17:31.939 } 00:17:31.939 EOF 00:17:31.939 )") 00:17:31.939 07:44:56 -- target/dif.sh@82 -- # gen_fio_conf 00:17:31.939 07:44:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:31.939 07:44:56 -- target/dif.sh@54 -- # local file 00:17:31.939 07:44:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:31.939 07:44:56 -- target/dif.sh@56 -- # cat 00:17:31.939 07:44:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:31.939 07:44:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:31.939 07:44:56 -- common/autotest_common.sh@1330 -- # shift 00:17:31.939 07:44:56 -- nvmf/common.sh@542 -- # cat 00:17:31.939 07:44:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:31.939 07:44:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:31.939 07:44:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:31.939 07:44:56 
-- target/dif.sh@72 -- # (( file <= files )) 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:31.939 07:44:56 -- nvmf/common.sh@544 -- # jq . 00:17:31.939 07:44:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:31.939 07:44:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:31.939 "params": { 00:17:31.939 "name": "Nvme0", 00:17:31.939 "trtype": "tcp", 00:17:31.939 "traddr": "10.0.0.2", 00:17:31.939 "adrfam": "ipv4", 00:17:31.939 "trsvcid": "4420", 00:17:31.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:31.939 "hdgst": false, 00:17:31.939 "ddgst": false 00:17:31.939 }, 00:17:31.939 "method": "bdev_nvme_attach_controller" 00:17:31.939 }' 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:31.939 07:44:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:31.939 07:44:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:31.939 07:44:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:31.939 07:44:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:31.939 07:44:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:31.939 07:44:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:31.939 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:17:31.939 ... 00:17:31.939 fio-3.35 00:17:31.939 Starting 3 threads 00:17:31.939 [2024-12-02 07:44:57.290093] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:31.939 [2024-12-02 07:44:57.290173] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:37.215 00:17:37.215 filename0: (groupid=0, jobs=1): err= 0: pid=74728: Mon Dec 2 07:45:02 2024 00:17:37.215 read: IOPS=284, BW=35.5MiB/s (37.3MB/s)(178MiB/5001msec) 00:17:37.215 slat (nsec): min=6351, max=56871, avg=14117.97, stdev=5586.19 00:17:37.215 clat (usec): min=9901, max=12272, avg=10519.15, stdev=467.81 00:17:37.215 lat (usec): min=9907, max=12290, avg=10533.26, stdev=468.63 00:17:37.215 clat percentiles (usec): 00:17:37.215 | 1.00th=[10028], 5.00th=[10028], 10.00th=[10028], 20.00th=[10159], 00:17:37.215 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10552], 00:17:37.215 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:17:37.215 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:17:37.215 | 99.99th=[12256] 00:17:37.215 bw ( KiB/s): min=34560, max=37632, per=33.50%, avg=36583.00, stdev=990.06, samples=9 00:17:37.215 iops : min= 270, max= 294, avg=285.78, stdev= 7.71, samples=9 00:17:37.215 lat (msec) : 10=2.60%, 20=97.40% 00:17:37.215 cpu : usr=91.56%, sys=7.94%, ctx=18, majf=0, minf=0 00:17:37.215 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.215 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:17:37.215 filename0: (groupid=0, jobs=1): err= 0: pid=74729: Mon Dec 2 07:45:02 2024 00:17:37.215 read: IOPS=284, BW=35.6MiB/s (37.3MB/s)(178MiB/5007msec) 00:17:37.215 slat (nsec): min=7054, max=64785, avg=15259.39, stdev=5485.74 00:17:37.215 clat (usec): min=7794, max=12327, avg=10505.85, stdev=477.81 00:17:37.215 lat (usec): min=7823, max=12351, avg=10521.11, stdev=478.57 00:17:37.215 clat percentiles (usec): 00:17:37.215 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10159], 00:17:37.215 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10552], 00:17:37.215 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:17:37.215 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12256], 99.95th=[12387], 00:17:37.215 | 99.99th=[12387] 00:17:37.215 bw ( KiB/s): min=34491, max=37632, per=33.32%, avg=36388.70, stdev=1101.96, samples=10 00:17:37.215 iops : min= 269, max= 294, avg=284.20, stdev= 8.65, samples=10 00:17:37.215 lat (msec) : 10=2.53%, 20=97.47% 00:17:37.215 cpu : usr=92.33%, sys=7.11%, ctx=11, majf=0, minf=0 00:17:37.215 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.215 issued rwts: total=1425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:17:37.215 filename0: (groupid=0, jobs=1): err= 0: pid=74730: Mon Dec 2 07:45:02 2024 00:17:37.215 read: IOPS=284, BW=35.6MiB/s (37.3MB/s)(178MiB/5007msec) 00:17:37.215 slat (nsec): min=6684, max=63827, avg=15246.04, stdev=5253.75 00:17:37.215 clat (usec): min=7764, max=12424, avg=10505.31, stdev=501.78 00:17:37.215 lat (usec): min=7774, max=12447, avg=10520.55, stdev=502.23 00:17:37.215 clat percentiles (usec): 00:17:37.215 | 1.00th=[ 9896], 5.00th=[10028], 
10.00th=[10028], 20.00th=[10159], 00:17:37.215 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10552], 00:17:37.215 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:17:37.215 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12387], 99.95th=[12387], 00:17:37.215 | 99.99th=[12387] 00:17:37.215 bw ( KiB/s): min=34491, max=37632, per=33.32%, avg=36388.70, stdev=1101.96, samples=10 00:17:37.215 iops : min= 269, max= 294, avg=284.20, stdev= 8.65, samples=10 00:17:37.215 lat (msec) : 10=2.67%, 20=97.33% 00:17:37.215 cpu : usr=92.59%, sys=6.89%, ctx=8, majf=0, minf=0 00:17:37.215 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.215 issued rwts: total=1425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:17:37.215 00:17:37.215 Run status group 0 (all jobs): 00:17:37.215 READ: bw=107MiB/s (112MB/s), 35.5MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=534MiB (560MB), run=5001-5007msec 00:17:37.215 07:45:02 -- target/dif.sh@107 -- # destroy_subsystems 0 00:17:37.216 07:45:02 -- target/dif.sh@43 -- # local sub 00:17:37.216 07:45:02 -- target/dif.sh@45 -- # for sub in "$@" 00:17:37.216 07:45:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:37.216 07:45:02 -- target/dif.sh@36 -- # local sub_id=0 00:17:37.216 07:45:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@109 -- # NULL_DIF=2 00:17:37.216 07:45:02 -- target/dif.sh@109 -- # bs=4k 00:17:37.216 07:45:02 -- target/dif.sh@109 -- # numjobs=8 00:17:37.216 07:45:02 -- target/dif.sh@109 -- # iodepth=16 00:17:37.216 07:45:02 -- target/dif.sh@109 -- # runtime= 00:17:37.216 07:45:02 -- target/dif.sh@109 -- # files=2 00:17:37.216 07:45:02 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:17:37.216 07:45:02 -- target/dif.sh@28 -- # local sub 00:17:37.216 07:45:02 -- target/dif.sh@30 -- # for sub in "$@" 00:17:37.216 07:45:02 -- target/dif.sh@31 -- # create_subsystem 0 00:17:37.216 07:45:02 -- target/dif.sh@18 -- # local sub_id=0 00:17:37.216 07:45:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 bdev_null0 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 [2024-12-02 07:45:02.637073] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@30 -- # for sub in "$@" 00:17:37.216 07:45:02 -- target/dif.sh@31 -- # create_subsystem 1 00:17:37.216 07:45:02 -- target/dif.sh@18 -- # local sub_id=1 00:17:37.216 07:45:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 bdev_null1 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@30 -- # for sub in "$@" 00:17:37.216 07:45:02 -- target/dif.sh@31 -- # create_subsystem 2 00:17:37.216 07:45:02 -- target/dif.sh@18 -- # local sub_id=2 00:17:37.216 07:45:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 bdev_null2 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:37.216 07:45:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.216 07:45:02 -- common/autotest_common.sh@10 -- # set +x 00:17:37.216 07:45:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.216 07:45:02 -- target/dif.sh@112 -- # fio /dev/fd/62 00:17:37.216 07:45:02 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:17:37.216 07:45:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:17:37.216 07:45:02 -- nvmf/common.sh@520 -- # config=() 00:17:37.216 07:45:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:37.216 07:45:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:37.216 07:45:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:37.216 { 00:17:37.216 "params": { 00:17:37.216 "name": "Nvme$subsystem", 00:17:37.216 "trtype": "$TEST_TRANSPORT", 00:17:37.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.216 "adrfam": "ipv4", 00:17:37.216 "trsvcid": "$NVMF_PORT", 00:17:37.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.216 "hdgst": ${hdgst:-false}, 00:17:37.216 "ddgst": ${ddgst:-false} 00:17:37.216 }, 00:17:37.216 "method": "bdev_nvme_attach_controller" 00:17:37.216 } 00:17:37.216 EOF 00:17:37.216 )") 00:17:37.216 07:45:02 -- target/dif.sh@82 -- # gen_fio_conf 00:17:37.216 07:45:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:37.217 07:45:02 -- target/dif.sh@54 -- # local file 00:17:37.217 07:45:02 -- target/dif.sh@56 -- # cat 00:17:37.217 07:45:02 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:37.217 07:45:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:37.217 07:45:02 -- nvmf/common.sh@542 -- # cat 00:17:37.217 07:45:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:37.217 07:45:02 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:37.217 07:45:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.217 07:45:02 -- common/autotest_common.sh@1330 -- # shift 00:17:37.217 07:45:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:37.217 07:45:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.217 07:45:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:37.217 07:45:02 -- target/dif.sh@72 -- # (( file <= files )) 00:17:37.217 07:45:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:37.217 07:45:02 -- target/dif.sh@73 -- # cat 00:17:37.217 07:45:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:37.217 { 00:17:37.217 "params": { 00:17:37.217 "name": "Nvme$subsystem", 00:17:37.217 "trtype": "$TEST_TRANSPORT", 00:17:37.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.217 "adrfam": "ipv4", 00:17:37.217 "trsvcid": "$NVMF_PORT", 00:17:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.217 "hdgst": ${hdgst:-false}, 00:17:37.217 "ddgst": ${ddgst:-false} 00:17:37.217 }, 00:17:37.217 "method": "bdev_nvme_attach_controller" 00:17:37.217 } 00:17:37.217 EOF 00:17:37.217 )") 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # grep 
libasan 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.217 07:45:02 -- nvmf/common.sh@542 -- # cat 00:17:37.217 07:45:02 -- target/dif.sh@72 -- # (( file++ )) 00:17:37.217 07:45:02 -- target/dif.sh@72 -- # (( file <= files )) 00:17:37.217 07:45:02 -- target/dif.sh@73 -- # cat 00:17:37.217 07:45:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:37.217 07:45:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:37.217 { 00:17:37.217 "params": { 00:17:37.217 "name": "Nvme$subsystem", 00:17:37.217 "trtype": "$TEST_TRANSPORT", 00:17:37.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.217 "adrfam": "ipv4", 00:17:37.217 "trsvcid": "$NVMF_PORT", 00:17:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.217 "hdgst": ${hdgst:-false}, 00:17:37.217 "ddgst": ${ddgst:-false} 00:17:37.217 }, 00:17:37.217 "method": "bdev_nvme_attach_controller" 00:17:37.217 } 00:17:37.217 EOF 00:17:37.217 )") 00:17:37.217 07:45:02 -- nvmf/common.sh@542 -- # cat 00:17:37.217 07:45:02 -- target/dif.sh@72 -- # (( file++ )) 00:17:37.217 07:45:02 -- target/dif.sh@72 -- # (( file <= files )) 00:17:37.217 07:45:02 -- nvmf/common.sh@544 -- # jq . 00:17:37.217 07:45:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:37.217 07:45:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:37.217 "params": { 00:17:37.217 "name": "Nvme0", 00:17:37.217 "trtype": "tcp", 00:17:37.217 "traddr": "10.0.0.2", 00:17:37.217 "adrfam": "ipv4", 00:17:37.217 "trsvcid": "4420", 00:17:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:37.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:37.217 "hdgst": false, 00:17:37.217 "ddgst": false 00:17:37.217 }, 00:17:37.217 "method": "bdev_nvme_attach_controller" 00:17:37.217 },{ 00:17:37.217 "params": { 00:17:37.217 "name": "Nvme1", 00:17:37.217 "trtype": "tcp", 00:17:37.217 "traddr": "10.0.0.2", 00:17:37.217 "adrfam": "ipv4", 00:17:37.217 "trsvcid": "4420", 00:17:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.217 "hdgst": false, 00:17:37.217 "ddgst": false 00:17:37.217 }, 00:17:37.217 "method": "bdev_nvme_attach_controller" 00:17:37.217 },{ 00:17:37.217 "params": { 00:17:37.217 "name": "Nvme2", 00:17:37.217 "trtype": "tcp", 00:17:37.217 "traddr": "10.0.0.2", 00:17:37.217 "adrfam": "ipv4", 00:17:37.217 "trsvcid": "4420", 00:17:37.217 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:37.217 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:37.217 "hdgst": false, 00:17:37.217 "ddgst": false 00:17:37.217 }, 00:17:37.217 "method": "bdev_nvme_attach_controller" 00:17:37.217 }' 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:37.217 07:45:02 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:37.217 07:45:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:37.217 07:45:02 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:37.217 07:45:02 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:37.217 07:45:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:37.217 07:45:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:37.476 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:17:37.476 ... 00:17:37.476 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:17:37.476 ... 00:17:37.476 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:17:37.476 ... 00:17:37.476 fio-3.35 00:17:37.476 Starting 24 threads 00:17:38.043 [2024-12-02 07:45:03.404457] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:38.043 [2024-12-02 07:45:03.404521] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:48.021 00:17:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=74825: Mon Dec 2 07:45:13 2024 00:17:48.021 read: IOPS=228, BW=912KiB/s (934kB/s)(9124KiB/10004msec) 00:17:48.021 slat (usec): min=3, max=8026, avg=20.96, stdev=237.23 00:17:48.021 clat (msec): min=8, max=154, avg=70.06, stdev=24.85 00:17:48.021 lat (msec): min=8, max=154, avg=70.08, stdev=24.84 00:17:48.021 clat percentiles (msec): 00:17:48.021 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:17:48.021 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:17:48.021 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:17:48.021 | 99.00th=[ 125], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 155], 00:17:48.021 | 99.99th=[ 155] 00:17:48.021 bw ( KiB/s): min= 632, max= 1680, per=4.18%, avg=897.05, stdev=248.19, samples=19 00:17:48.021 iops : min= 158, max= 420, avg=224.26, stdev=62.05, samples=19 00:17:48.021 lat (msec) : 10=0.26%, 20=0.44%, 50=25.60%, 100=62.52%, 250=11.18% 00:17:48.021 cpu : usr=32.47%, sys=1.75%, ctx=882, majf=0, minf=10 00:17:48.021 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:17:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.021 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.021 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=74826: Mon Dec 2 07:45:13 2024 00:17:48.021 read: IOPS=216, BW=864KiB/s (885kB/s)(8668KiB/10030msec) 00:17:48.021 slat (usec): min=4, max=8027, avg=25.39, stdev=297.94 00:17:48.021 clat (msec): min=21, max=156, avg=73.92, stdev=26.40 00:17:48.021 lat (msec): min=21, max=156, avg=73.95, stdev=26.40 00:17:48.021 clat percentiles (msec): 00:17:48.021 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 48], 00:17:48.021 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:17:48.021 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 122], 00:17:48.021 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:17:48.021 | 99.99th=[ 157] 00:17:48.021 bw ( KiB/s): min= 528, max= 1584, per=4.00%, avg=860.40, stdev=251.97, samples=20 00:17:48.021 iops : min= 132, max= 396, avg=215.10, stdev=62.99, samples=20 00:17:48.021 lat (msec) : 50=23.67%, 100=59.81%, 250=16.52% 00:17:48.021 cpu : usr=31.31%, sys=1.85%, ctx=844, majf=0, minf=9 00:17:48.021 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:17:48.021 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.021 complete : 0=0.0%, 4=89.2%, 8=9.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.021 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=74827: Mon Dec 2 07:45:13 2024 00:17:48.021 read: IOPS=217, BW=871KiB/s (891kB/s)(8720KiB/10017msec) 00:17:48.021 slat (usec): min=3, max=8023, avg=22.89, stdev=210.07 00:17:48.021 clat (msec): min=16, max=152, avg=73.37, stdev=26.22 00:17:48.021 lat (msec): min=16, max=152, avg=73.40, stdev=26.22 00:17:48.021 clat percentiles (msec): 00:17:48.021 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:17:48.021 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 79], 00:17:48.021 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 112], 00:17:48.021 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:17:48.021 | 99.99th=[ 153] 00:17:48.021 bw ( KiB/s): min= 528, max= 1552, per=4.03%, avg=865.60, stdev=246.07, samples=20 00:17:48.021 iops : min= 132, max= 388, avg=216.40, stdev=61.52, samples=20 00:17:48.021 lat (msec) : 20=0.14%, 50=25.32%, 100=54.72%, 250=19.82% 00:17:48.021 cpu : usr=42.38%, sys=2.05%, ctx=1134, majf=0, minf=9 00:17:48.021 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:17:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.021 complete : 0=0.0%, 4=89.1%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.021 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=74828: Mon Dec 2 07:45:13 2024 00:17:48.021 read: IOPS=225, BW=900KiB/s (922kB/s)(9036KiB/10038msec) 00:17:48.282 slat (usec): min=4, max=8020, avg=18.03, stdev=188.45 00:17:48.282 clat (msec): min=16, max=133, avg=71.00, stdev=22.66 00:17:48.282 lat (msec): min=16, max=133, avg=71.02, stdev=22.66 00:17:48.282 clat percentiles (msec): 00:17:48.282 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:17:48.282 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:17:48.282 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:17:48.282 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:17:48.282 | 99.99th=[ 134] 00:17:48.282 bw ( KiB/s): min= 683, max= 1544, per=4.17%, avg=896.85, stdev=203.22, samples=20 00:17:48.282 iops : min= 170, max= 386, avg=224.15, stdev=50.87, samples=20 00:17:48.282 lat (msec) : 20=0.62%, 50=21.51%, 100=66.71%, 250=11.16% 00:17:48.282 cpu : usr=32.87%, sys=1.51%, ctx=931, majf=0, minf=9 00:17:48.282 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:17:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.282 filename0: (groupid=0, jobs=1): err= 0: pid=74829: Mon Dec 2 07:45:13 2024 00:17:48.282 read: IOPS=235, BW=940KiB/s (963kB/s)(9404KiB/10004msec) 00:17:48.282 slat (usec): min=4, max=4032, avg=21.92, stdev=146.53 00:17:48.282 clat (msec): min=5, max=135, avg=67.97, stdev=23.33 00:17:48.282 lat (msec): min=6, max=135, avg=67.99, stdev=23.33 00:17:48.282 clat 
percentiles (msec): 00:17:48.282 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 48], 00:17:48.282 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:17:48.282 | 70.00th=[ 78], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 107], 00:17:48.282 | 99.00th=[ 114], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:17:48.282 | 99.99th=[ 136] 00:17:48.282 bw ( KiB/s): min= 720, max= 1664, per=4.32%, avg=927.84, stdev=221.24, samples=19 00:17:48.282 iops : min= 180, max= 416, avg=231.95, stdev=55.32, samples=19 00:17:48.282 lat (msec) : 10=0.26%, 20=0.43%, 50=25.86%, 100=61.85%, 250=11.61% 00:17:48.282 cpu : usr=39.36%, sys=2.26%, ctx=1306, majf=0, minf=9 00:17:48.282 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:17:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.282 filename0: (groupid=0, jobs=1): err= 0: pid=74830: Mon Dec 2 07:45:13 2024 00:17:48.282 read: IOPS=218, BW=872KiB/s (893kB/s)(8764KiB/10049msec) 00:17:48.282 slat (usec): min=3, max=8023, avg=25.06, stdev=302.31 00:17:48.282 clat (msec): min=2, max=151, avg=73.14, stdev=30.64 00:17:48.282 lat (msec): min=2, max=151, avg=73.17, stdev=30.65 00:17:48.282 clat percentiles (msec): 00:17:48.282 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 36], 20.00th=[ 48], 00:17:48.282 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 81], 00:17:48.282 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 122], 00:17:48.282 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:17:48.282 | 99.99th=[ 153] 00:17:48.282 bw ( KiB/s): min= 512, max= 1904, per=4.06%, avg=872.30, stdev=332.13, samples=20 00:17:48.282 iops : min= 128, max= 476, avg=218.05, stdev=83.06, samples=20 00:17:48.282 lat (msec) : 4=3.42%, 10=1.60%, 20=0.73%, 50=17.80%, 100=55.09% 00:17:48.282 lat (msec) : 250=21.36% 00:17:48.282 cpu : usr=36.31%, sys=1.98%, ctx=1064, majf=0, minf=9 00:17:48.282 IO depths : 1=0.3%, 2=2.4%, 4=8.7%, 8=73.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:17:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 complete : 0=0.0%, 4=90.2%, 8=7.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.282 filename0: (groupid=0, jobs=1): err= 0: pid=74831: Mon Dec 2 07:45:13 2024 00:17:48.282 read: IOPS=234, BW=937KiB/s (959kB/s)(9368KiB/10002msec) 00:17:48.282 slat (usec): min=4, max=2067, avg=16.31, stdev=59.76 00:17:48.282 clat (usec): min=1829, max=147341, avg=68245.65, stdev=26567.87 00:17:48.282 lat (usec): min=1837, max=147357, avg=68261.96, stdev=26568.12 00:17:48.282 clat percentiles (msec): 00:17:48.282 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 46], 00:17:48.282 | 30.00th=[ 52], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 74], 00:17:48.282 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:17:48.282 | 99.00th=[ 118], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 148], 00:17:48.282 | 99.99th=[ 148] 00:17:48.282 bw ( KiB/s): min= 528, max= 1624, per=4.21%, avg=905.26, stdev=246.35, samples=19 00:17:48.282 iops : min= 132, max= 406, avg=226.32, stdev=61.59, samples=19 00:17:48.282 lat (msec) : 2=0.68%, 4=1.37%, 10=0.26%, 20=0.47%, 50=25.23% 
00:17:48.282 lat (msec) : 100=56.62%, 250=15.37% 00:17:48.282 cpu : usr=43.05%, sys=2.07%, ctx=1622, majf=0, minf=9 00:17:48.282 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:17:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.282 filename0: (groupid=0, jobs=1): err= 0: pid=74832: Mon Dec 2 07:45:13 2024 00:17:48.282 read: IOPS=238, BW=953KiB/s (976kB/s)(9536KiB/10006msec) 00:17:48.282 slat (usec): min=4, max=8040, avg=28.60, stdev=284.27 00:17:48.282 clat (msec): min=10, max=128, avg=67.00, stdev=23.26 00:17:48.282 lat (msec): min=10, max=128, avg=67.03, stdev=23.24 00:17:48.282 clat percentiles (msec): 00:17:48.282 | 1.00th=[ 19], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 47], 00:17:48.282 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:17:48.282 | 70.00th=[ 78], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 106], 00:17:48.282 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 126], 99.95th=[ 129], 00:17:48.282 | 99.99th=[ 129] 00:17:48.282 bw ( KiB/s): min= 712, max= 1768, per=4.38%, avg=941.63, stdev=242.14, samples=19 00:17:48.282 iops : min= 178, max= 442, avg=235.37, stdev=60.57, samples=19 00:17:48.282 lat (msec) : 20=1.43%, 50=25.13%, 100=63.21%, 250=10.23% 00:17:48.282 cpu : usr=41.05%, sys=2.08%, ctx=1424, majf=0, minf=9 00:17:48.282 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:17:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 issued rwts: total=2384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.282 filename1: (groupid=0, jobs=1): err= 0: pid=74833: Mon Dec 2 07:45:13 2024 00:17:48.282 read: IOPS=218, BW=874KiB/s (895kB/s)(8756KiB/10016msec) 00:17:48.282 slat (usec): min=3, max=4021, avg=16.16, stdev=85.77 00:17:48.282 clat (msec): min=19, max=156, avg=73.08, stdev=24.34 00:17:48.282 lat (msec): min=19, max=156, avg=73.10, stdev=24.34 00:17:48.282 clat percentiles (msec): 00:17:48.282 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 50], 00:17:48.282 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 78], 00:17:48.282 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:17:48.282 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:17:48.282 | 99.99th=[ 157] 00:17:48.282 bw ( KiB/s): min= 656, max= 1480, per=4.07%, avg=874.30, stdev=206.16, samples=20 00:17:48.282 iops : min= 164, max= 370, avg=218.55, stdev=51.56, samples=20 00:17:48.282 lat (msec) : 20=0.14%, 50=22.48%, 100=62.95%, 250=14.44% 00:17:48.282 cpu : usr=36.63%, sys=1.74%, ctx=1368, majf=0, minf=9 00:17:48.282 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.1%, 16=16.9%, 32=0.0%, >=64=0.0% 00:17:48.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.282 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.282 filename1: (groupid=0, jobs=1): err= 0: pid=74834: Mon Dec 2 07:45:13 2024 00:17:48.282 read: IOPS=232, BW=930KiB/s 
(952kB/s)(9304KiB/10009msec) 00:17:48.282 slat (usec): min=4, max=8029, avg=24.22, stdev=246.74 00:17:48.282 clat (msec): min=9, max=131, avg=68.74, stdev=23.18 00:17:48.282 lat (msec): min=9, max=131, avg=68.76, stdev=23.18 00:17:48.282 clat percentiles (msec): 00:17:48.282 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 38], 20.00th=[ 48], 00:17:48.282 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:17:48.282 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 101], 95.00th=[ 108], 00:17:48.282 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 132], 99.95th=[ 132], 00:17:48.282 | 99.99th=[ 132] 00:17:48.282 bw ( KiB/s): min= 688, max= 1584, per=4.25%, avg=913.26, stdev=213.40, samples=19 00:17:48.282 iops : min= 172, max= 396, avg=228.32, stdev=53.35, samples=19 00:17:48.282 lat (msec) : 10=0.26%, 20=0.30%, 50=25.67%, 100=63.93%, 250=9.85% 00:17:48.282 cpu : usr=33.94%, sys=1.75%, ctx=948, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename1: (groupid=0, jobs=1): err= 0: pid=74835: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=231, BW=925KiB/s (947kB/s)(9292KiB/10049msec) 00:17:48.283 slat (usec): min=4, max=8029, avg=19.98, stdev=203.71 00:17:48.283 clat (msec): min=2, max=139, avg=69.06, stdev=26.38 00:17:48.283 lat (msec): min=2, max=139, avg=69.08, stdev=26.38 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 35], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:17:48.283 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 109], 00:17:48.283 | 99.00th=[ 116], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:17:48.283 | 99.99th=[ 140] 00:17:48.283 bw ( KiB/s): min= 704, max= 1667, per=4.29%, avg=922.85, stdev=248.73, samples=20 00:17:48.283 iops : min= 176, max= 416, avg=230.65, stdev=62.08, samples=20 00:17:48.283 lat (msec) : 4=0.99%, 10=3.06%, 20=0.82%, 50=19.93%, 100=61.26% 00:17:48.283 lat (msec) : 250=13.95% 00:17:48.283 cpu : usr=41.76%, sys=2.11%, ctx=1227, majf=0, minf=9 00:17:48.283 IO depths : 1=0.3%, 2=0.9%, 4=2.5%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename1: (groupid=0, jobs=1): err= 0: pid=74836: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=228, BW=915KiB/s (937kB/s)(9176KiB/10027msec) 00:17:48.283 slat (usec): min=4, max=8027, avg=40.96, stdev=437.74 00:17:48.283 clat (msec): min=21, max=151, avg=69.77, stdev=22.92 00:17:48.283 lat (msec): min=21, max=151, avg=69.81, stdev=22.92 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:17:48.283 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 107], 00:17:48.283 | 99.00th=[ 115], 99.50th=[ 123], 99.90th=[ 148], 99.95th=[ 153], 00:17:48.283 | 99.99th=[ 153] 00:17:48.283 bw ( KiB/s): 
min= 640, max= 1552, per=4.24%, avg=911.20, stdev=207.07, samples=20 00:17:48.283 iops : min= 160, max= 388, avg=227.80, stdev=51.77, samples=20 00:17:48.283 lat (msec) : 50=25.15%, 100=62.82%, 250=12.03% 00:17:48.283 cpu : usr=37.04%, sys=1.91%, ctx=1180, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename1: (groupid=0, jobs=1): err= 0: pid=74837: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=219, BW=877KiB/s (898kB/s)(8796KiB/10028msec) 00:17:48.283 slat (nsec): min=4485, max=36376, avg=13591.20, stdev=4287.64 00:17:48.283 clat (msec): min=22, max=132, avg=72.87, stdev=23.23 00:17:48.283 lat (msec): min=22, max=132, avg=72.88, stdev=23.23 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 50], 00:17:48.283 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:17:48.283 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:17:48.283 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 133], 00:17:48.283 | 99.99th=[ 133] 00:17:48.283 bw ( KiB/s): min= 656, max= 1504, per=4.07%, avg=873.10, stdev=197.55, samples=20 00:17:48.283 iops : min= 164, max= 376, avg=218.25, stdev=49.39, samples=20 00:17:48.283 lat (msec) : 50=21.06%, 100=65.44%, 250=13.51% 00:17:48.283 cpu : usr=31.56%, sys=1.61%, ctx=842, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=79.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename1: (groupid=0, jobs=1): err= 0: pid=74838: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=209, BW=837KiB/s (858kB/s)(8388KiB/10016msec) 00:17:48.283 slat (usec): min=4, max=8033, avg=41.24, stdev=425.41 00:17:48.283 clat (msec): min=17, max=152, avg=76.19, stdev=27.75 00:17:48.283 lat (msec): min=17, max=152, avg=76.23, stdev=27.75 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 84], 00:17:48.283 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 120], 00:17:48.283 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 153], 00:17:48.283 | 99.99th=[ 153] 00:17:48.283 bw ( KiB/s): min= 512, max= 1560, per=3.87%, avg=832.40, stdev=259.87, samples=20 00:17:48.283 iops : min= 128, max= 390, avg=208.10, stdev=64.97, samples=20 00:17:48.283 lat (msec) : 20=0.33%, 50=22.37%, 100=56.41%, 250=20.89% 00:17:48.283 cpu : usr=35.49%, sys=1.97%, ctx=1246, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=73.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=90.0%, 8=7.9%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 
00:17:48.283 filename1: (groupid=0, jobs=1): err= 0: pid=74839: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=220, BW=882KiB/s (904kB/s)(8856KiB/10036msec) 00:17:48.283 slat (usec): min=4, max=8027, avg=24.86, stdev=294.86 00:17:48.283 clat (msec): min=16, max=135, avg=72.36, stdev=24.49 00:17:48.283 lat (msec): min=16, max=135, avg=72.39, stdev=24.48 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:17:48.283 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:17:48.283 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:17:48.283 | 99.99th=[ 136] 00:17:48.283 bw ( KiB/s): min= 528, max= 1504, per=4.09%, avg=878.95, stdev=220.22, samples=20 00:17:48.283 iops : min= 132, max= 376, avg=219.65, stdev=55.10, samples=20 00:17:48.283 lat (msec) : 20=0.72%, 50=22.18%, 100=62.92%, 250=14.18% 00:17:48.283 cpu : usr=31.70%, sys=1.49%, ctx=850, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=78.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename1: (groupid=0, jobs=1): err= 0: pid=74840: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=224, BW=897KiB/s (918kB/s)(9004KiB/10039msec) 00:17:48.283 slat (usec): min=6, max=8019, avg=22.91, stdev=221.52 00:17:48.283 clat (msec): min=16, max=131, avg=71.22, stdev=22.65 00:17:48.283 lat (msec): min=16, max=132, avg=71.24, stdev=22.66 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:17:48.283 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:17:48.283 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 132], 00:17:48.283 | 99.99th=[ 132] 00:17:48.283 bw ( KiB/s): min= 652, max= 1448, per=4.16%, avg=893.70, stdev=193.59, samples=20 00:17:48.283 iops : min= 163, max= 362, avg=223.40, stdev=48.42, samples=20 00:17:48.283 lat (msec) : 20=0.62%, 50=20.52%, 100=67.13%, 250=11.73% 00:17:48.283 cpu : usr=40.35%, sys=2.14%, ctx=1272, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename2: (groupid=0, jobs=1): err= 0: pid=74841: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=223, BW=892KiB/s (914kB/s)(8924KiB/10002msec) 00:17:48.283 slat (usec): min=4, max=8030, avg=22.11, stdev=213.65 00:17:48.283 clat (msec): min=3, max=143, avg=71.60, stdev=27.09 00:17:48.283 lat (msec): min=3, max=143, avg=71.62, stdev=27.10 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 78], 00:17:48.283 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 112], 00:17:48.283 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 
142], 99.95th=[ 144], 00:17:48.283 | 99.99th=[ 144] 00:17:48.283 bw ( KiB/s): min= 528, max= 1736, per=4.08%, avg=876.21, stdev=283.19, samples=19 00:17:48.283 iops : min= 132, max= 434, avg=219.05, stdev=70.80, samples=19 00:17:48.283 lat (msec) : 4=0.13%, 20=1.30%, 50=25.32%, 100=55.76%, 250=17.48% 00:17:48.283 cpu : usr=42.72%, sys=2.33%, ctx=1254, majf=0, minf=9 00:17:48.283 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:17:48.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.283 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.283 filename2: (groupid=0, jobs=1): err= 0: pid=74842: Mon Dec 2 07:45:13 2024 00:17:48.283 read: IOPS=228, BW=913KiB/s (935kB/s)(9172KiB/10046msec) 00:17:48.283 slat (usec): min=3, max=8515, avg=24.75, stdev=307.26 00:17:48.283 clat (msec): min=2, max=155, avg=69.94, stdev=27.45 00:17:48.283 lat (msec): min=2, max=155, avg=69.97, stdev=27.45 00:17:48.283 clat percentiles (msec): 00:17:48.283 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 48], 00:17:48.283 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:17:48.284 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:17:48.284 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:17:48.284 | 99.99th=[ 157] 00:17:48.284 bw ( KiB/s): min= 542, max= 1776, per=4.24%, avg=911.55, stdev=282.85, samples=20 00:17:48.284 iops : min= 135, max= 444, avg=227.85, stdev=70.75, samples=20 00:17:48.284 lat (msec) : 4=1.40%, 10=2.70%, 20=0.70%, 50=19.58%, 100=61.54% 00:17:48.284 lat (msec) : 250=14.09% 00:17:48.284 cpu : usr=32.76%, sys=1.71%, ctx=936, majf=0, minf=9 00:17:48.284 IO depths : 1=0.2%, 2=0.9%, 4=3.1%, 8=79.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 filename2: (groupid=0, jobs=1): err= 0: pid=74843: Mon Dec 2 07:45:13 2024 00:17:48.284 read: IOPS=226, BW=907KiB/s (929kB/s)(9100KiB/10030msec) 00:17:48.284 slat (usec): min=3, max=8024, avg=25.36, stdev=237.52 00:17:48.284 clat (msec): min=20, max=135, avg=70.40, stdev=23.81 00:17:48.284 lat (msec): min=20, max=135, avg=70.43, stdev=23.81 00:17:48.284 clat percentiles (msec): 00:17:48.284 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:17:48.284 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:17:48.284 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:17:48.284 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 136], 00:17:48.284 | 99.99th=[ 136] 00:17:48.284 bw ( KiB/s): min= 640, max= 1496, per=4.21%, avg=903.45, stdev=222.68, samples=20 00:17:48.284 iops : min= 160, max= 374, avg=225.85, stdev=55.66, samples=20 00:17:48.284 lat (msec) : 50=24.44%, 100=61.27%, 250=14.29% 00:17:48.284 cpu : usr=41.45%, sys=2.18%, ctx=1198, majf=0, minf=9 00:17:48.284 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 
issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 filename2: (groupid=0, jobs=1): err= 0: pid=74844: Mon Dec 2 07:45:13 2024 00:17:48.284 read: IOPS=206, BW=825KiB/s (845kB/s)(8284KiB/10036msec) 00:17:48.284 slat (usec): min=5, max=4029, avg=16.46, stdev=88.38 00:17:48.284 clat (msec): min=21, max=156, avg=77.40, stdev=27.68 00:17:48.284 lat (msec): min=21, max=156, avg=77.42, stdev=27.68 00:17:48.284 clat percentiles (msec): 00:17:48.284 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 50], 00:17:48.284 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 85], 00:17:48.284 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 121], 00:17:48.284 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:17:48.284 | 99.99th=[ 157] 00:17:48.284 bw ( KiB/s): min= 512, max= 1536, per=3.82%, avg=821.85, stdev=255.45, samples=20 00:17:48.284 iops : min= 128, max= 384, avg=205.45, stdev=63.87, samples=20 00:17:48.284 lat (msec) : 50=21.15%, 100=56.11%, 250=22.74% 00:17:48.284 cpu : usr=38.95%, sys=1.62%, ctx=1109, majf=0, minf=9 00:17:48.284 IO depths : 1=0.1%, 2=2.7%, 4=11.1%, 8=71.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 complete : 0=0.0%, 4=90.5%, 8=7.1%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 issued rwts: total=2071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 filename2: (groupid=0, jobs=1): err= 0: pid=74845: Mon Dec 2 07:45:13 2024 00:17:48.284 read: IOPS=224, BW=900KiB/s (921kB/s)(9012KiB/10017msec) 00:17:48.284 slat (usec): min=5, max=8023, avg=18.51, stdev=168.79 00:17:48.284 clat (msec): min=19, max=139, avg=71.02, stdev=23.50 00:17:48.284 lat (msec): min=19, max=139, avg=71.04, stdev=23.50 00:17:48.284 clat percentiles (msec): 00:17:48.284 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 48], 00:17:48.284 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:17:48.284 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 108], 00:17:48.284 | 99.00th=[ 117], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 140], 00:17:48.284 | 99.99th=[ 140] 00:17:48.284 bw ( KiB/s): min= 656, max= 1496, per=4.18%, avg=897.20, stdev=212.79, samples=20 00:17:48.284 iops : min= 164, max= 374, avg=224.30, stdev=53.20, samples=20 00:17:48.284 lat (msec) : 20=0.13%, 50=23.79%, 100=62.36%, 250=13.72% 00:17:48.284 cpu : usr=34.64%, sys=2.00%, ctx=1035, majf=0, minf=9 00:17:48.284 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 filename2: (groupid=0, jobs=1): err= 0: pid=74846: Mon Dec 2 07:45:13 2024 00:17:48.284 read: IOPS=232, BW=929KiB/s (951kB/s)(9304KiB/10017msec) 00:17:48.284 slat (usec): min=3, max=8025, avg=29.47, stdev=294.26 00:17:48.284 clat (msec): min=18, max=140, avg=68.73, stdev=23.41 00:17:48.284 lat (msec): min=18, max=140, avg=68.76, stdev=23.40 00:17:48.284 clat percentiles (msec): 00:17:48.284 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 48], 00:17:48.284 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 73], 
00:17:48.284 | 70.00th=[ 80], 80.00th=[ 92], 90.00th=[ 104], 95.00th=[ 108], 00:17:48.284 | 99.00th=[ 114], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:17:48.284 | 99.99th=[ 140] 00:17:48.284 bw ( KiB/s): min= 712, max= 1608, per=4.31%, avg=926.40, stdev=218.37, samples=20 00:17:48.284 iops : min= 178, max= 402, avg=231.60, stdev=54.59, samples=20 00:17:48.284 lat (msec) : 20=0.52%, 50=24.29%, 100=62.21%, 250=12.98% 00:17:48.284 cpu : usr=42.39%, sys=2.26%, ctx=1342, majf=0, minf=9 00:17:48.284 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 filename2: (groupid=0, jobs=1): err= 0: pid=74847: Mon Dec 2 07:45:13 2024 00:17:48.284 read: IOPS=219, BW=878KiB/s (899kB/s)(8812KiB/10033msec) 00:17:48.284 slat (usec): min=4, max=4023, avg=16.31, stdev=85.68 00:17:48.284 clat (msec): min=19, max=144, avg=72.75, stdev=23.67 00:17:48.284 lat (msec): min=19, max=145, avg=72.77, stdev=23.67 00:17:48.284 clat percentiles (msec): 00:17:48.284 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 51], 00:17:48.284 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:17:48.284 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:17:48.284 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:17:48.284 | 99.99th=[ 146] 00:17:48.284 bw ( KiB/s): min= 640, max= 1456, per=4.07%, avg=874.55, stdev=196.23, samples=20 00:17:48.284 iops : min= 160, max= 364, avg=218.60, stdev=49.08, samples=20 00:17:48.284 lat (msec) : 20=0.09%, 50=19.79%, 100=65.46%, 250=14.66% 00:17:48.284 cpu : usr=38.24%, sys=1.84%, ctx=1129, majf=0, minf=9 00:17:48.284 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=80.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 filename2: (groupid=0, jobs=1): err= 0: pid=74848: Mon Dec 2 07:45:13 2024 00:17:48.284 read: IOPS=223, BW=896KiB/s (917kB/s)(8992KiB/10036msec) 00:17:48.284 slat (usec): min=5, max=8025, avg=17.97, stdev=169.02 00:17:48.284 clat (msec): min=21, max=132, avg=71.31, stdev=23.31 00:17:48.284 lat (msec): min=21, max=132, avg=71.33, stdev=23.31 00:17:48.284 clat percentiles (msec): 00:17:48.284 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:17:48.284 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:17:48.284 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:17:48.284 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:17:48.284 | 99.99th=[ 133] 00:17:48.284 bw ( KiB/s): min= 641, max= 1528, per=4.15%, avg=892.65, stdev=215.11, samples=20 00:17:48.284 iops : min= 160, max= 382, avg=223.15, stdev=53.79, samples=20 00:17:48.284 lat (msec) : 50=23.00%, 100=64.90%, 250=12.10% 00:17:48.284 cpu : usr=36.34%, sys=1.65%, ctx=1137, majf=0, minf=9 00:17:48.284 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:17:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 
complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.284 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:17:48.284 00:17:48.284 Run status group 0 (all jobs): 00:17:48.284 READ: bw=21.0MiB/s (22.0MB/s), 825KiB/s-953KiB/s (845kB/s-976kB/s), io=211MiB (221MB), run=10002-10049msec 00:17:48.284 07:45:13 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:17:48.284 07:45:13 -- target/dif.sh@43 -- # local sub 00:17:48.284 07:45:13 -- target/dif.sh@45 -- # for sub in "$@" 00:17:48.284 07:45:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:48.284 07:45:13 -- target/dif.sh@36 -- # local sub_id=0 00:17:48.284 07:45:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:48.284 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.284 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.284 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.284 07:45:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:48.284 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.284 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.284 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.284 07:45:13 -- target/dif.sh@45 -- # for sub in "$@" 00:17:48.284 07:45:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:17:48.284 07:45:13 -- target/dif.sh@36 -- # local sub_id=1 00:17:48.284 07:45:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.284 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.284 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@45 -- # for sub in "$@" 00:17:48.285 07:45:13 -- target/dif.sh@46 -- # destroy_subsystem 2 00:17:48.285 07:45:13 -- target/dif.sh@36 -- # local sub_id=2 00:17:48.285 07:45:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@115 -- # NULL_DIF=1 00:17:48.285 07:45:13 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:17:48.285 07:45:13 -- target/dif.sh@115 -- # numjobs=2 00:17:48.285 07:45:13 -- target/dif.sh@115 -- # iodepth=8 00:17:48.285 07:45:13 -- target/dif.sh@115 -- # runtime=5 00:17:48.285 07:45:13 -- target/dif.sh@115 -- # files=1 00:17:48.285 07:45:13 -- target/dif.sh@117 -- # create_subsystems 0 1 00:17:48.285 07:45:13 -- target/dif.sh@28 -- # local sub 00:17:48.285 07:45:13 -- target/dif.sh@30 -- # for sub in "$@" 00:17:48.285 07:45:13 -- target/dif.sh@31 -- # create_subsystem 0 
00:17:48.285 07:45:13 -- target/dif.sh@18 -- # local sub_id=0 00:17:48.285 07:45:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 bdev_null0 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 [2024-12-02 07:45:13.881897] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@30 -- # for sub in "$@" 00:17:48.285 07:45:13 -- target/dif.sh@31 -- # create_subsystem 1 00:17:48.285 07:45:13 -- target/dif.sh@18 -- # local sub_id=1 00:17:48.285 07:45:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.285 bdev_null1 00:17:48.285 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.285 07:45:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:17:48.285 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.285 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.544 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.544 07:45:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:17:48.544 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.544 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.544 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.544 07:45:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.544 07:45:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.544 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.544 07:45:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.544 07:45:13 -- target/dif.sh@118 -- # fio /dev/fd/62 00:17:48.544 07:45:13 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:17:48.544 07:45:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:17:48.544 07:45:13 -- nvmf/common.sh@520 -- # config=() 00:17:48.544 07:45:13 -- nvmf/common.sh@520 -- # local subsystem config 00:17:48.544 07:45:13 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.544 07:45:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.544 { 00:17:48.544 "params": { 00:17:48.544 "name": "Nvme$subsystem", 00:17:48.544 "trtype": "$TEST_TRANSPORT", 00:17:48.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.544 "adrfam": "ipv4", 00:17:48.544 "trsvcid": "$NVMF_PORT", 00:17:48.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.544 "hdgst": ${hdgst:-false}, 00:17:48.544 "ddgst": ${ddgst:-false} 00:17:48.544 }, 00:17:48.544 "method": "bdev_nvme_attach_controller" 00:17:48.544 } 00:17:48.544 EOF 00:17:48.544 )") 00:17:48.544 07:45:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:48.544 07:45:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:48.544 07:45:13 -- target/dif.sh@82 -- # gen_fio_conf 00:17:48.544 07:45:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:48.544 07:45:13 -- target/dif.sh@54 -- # local file 00:17:48.544 07:45:13 -- target/dif.sh@56 -- # cat 00:17:48.544 07:45:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:48.544 07:45:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:48.544 07:45:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.544 07:45:13 -- nvmf/common.sh@542 -- # cat 00:17:48.544 07:45:13 -- common/autotest_common.sh@1330 -- # shift 00:17:48.544 07:45:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:48.544 07:45:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:48.544 07:45:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:48.544 07:45:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:48.544 07:45:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:48.544 { 00:17:48.544 "params": { 00:17:48.544 "name": "Nvme$subsystem", 00:17:48.544 "trtype": "$TEST_TRANSPORT", 00:17:48.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.544 "adrfam": "ipv4", 00:17:48.544 "trsvcid": "$NVMF_PORT", 00:17:48.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.544 "hdgst": ${hdgst:-false}, 00:17:48.544 "ddgst": ${ddgst:-false} 00:17:48.544 }, 00:17:48.544 "method": "bdev_nvme_attach_controller" 00:17:48.544 } 00:17:48.544 EOF 00:17:48.544 )") 00:17:48.544 07:45:13 -- target/dif.sh@72 -- # (( file <= files )) 00:17:48.544 07:45:13 -- target/dif.sh@73 -- # cat 00:17:48.544 07:45:13 -- nvmf/common.sh@542 -- # cat 00:17:48.544 07:45:13 -- target/dif.sh@72 -- # (( file++ )) 00:17:48.544 07:45:13 -- target/dif.sh@72 -- # (( file <= files )) 00:17:48.544 07:45:13 -- nvmf/common.sh@544 -- # jq . 
00:17:48.544 07:45:13 -- nvmf/common.sh@545 -- # IFS=, 00:17:48.544 07:45:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:48.544 "params": { 00:17:48.544 "name": "Nvme0", 00:17:48.544 "trtype": "tcp", 00:17:48.544 "traddr": "10.0.0.2", 00:17:48.544 "adrfam": "ipv4", 00:17:48.544 "trsvcid": "4420", 00:17:48.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:48.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:48.544 "hdgst": false, 00:17:48.544 "ddgst": false 00:17:48.544 }, 00:17:48.544 "method": "bdev_nvme_attach_controller" 00:17:48.544 },{ 00:17:48.544 "params": { 00:17:48.544 "name": "Nvme1", 00:17:48.544 "trtype": "tcp", 00:17:48.544 "traddr": "10.0.0.2", 00:17:48.544 "adrfam": "ipv4", 00:17:48.544 "trsvcid": "4420", 00:17:48.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.544 "hdgst": false, 00:17:48.544 "ddgst": false 00:17:48.544 }, 00:17:48.544 "method": "bdev_nvme_attach_controller" 00:17:48.544 }' 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:48.544 07:45:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:48.544 07:45:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:48.544 07:45:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:48.544 07:45:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:48.544 07:45:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:48.544 07:45:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:48.544 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:17:48.544 ... 00:17:48.544 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:17:48.544 ... 00:17:48.544 fio-3.35 00:17:48.544 Starting 4 threads 00:17:49.111 [2024-12-02 07:45:14.494636] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
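The JSON printed just above is what gen_nvmf_target_json hands to fio's spdk_bdev plugin over /dev/fd/62, while the generated job file arrives on /dev/fd/61. As a rough standalone sketch of the same invocation (file paths, the Nvme0n1 bdev name, and the single-controller config are illustrative; the harness wraps the printed params into a full "subsystems"/"bdev" config and attaches two controllers):

  cat > /tmp/bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0" }
          }
        ]
      }
    ]
  }
  EOF
  # /tmp/job.fio is a hypothetical job file referencing the attached bdev (e.g. filename=Nvme0n1)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/job.fio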
00:17:49.111 [2024-12-02 07:45:14.494957] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:54.384 00:17:54.384 filename0: (groupid=0, jobs=1): err= 0: pid=74997: Mon Dec 2 07:45:19 2024 00:17:54.384 read: IOPS=2421, BW=18.9MiB/s (19.8MB/s)(94.6MiB/5002msec) 00:17:54.384 slat (nsec): min=6732, max=63533, avg=14153.66, stdev=4450.10 00:17:54.384 clat (usec): min=1322, max=5865, avg=3270.17, stdev=1007.74 00:17:54.384 lat (usec): min=1335, max=5879, avg=3284.33, stdev=1007.19 00:17:54.384 clat percentiles (usec): 00:17:54.384 | 1.00th=[ 1762], 5.00th=[ 1827], 10.00th=[ 1893], 20.00th=[ 2311], 00:17:54.384 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 3982], 00:17:54.384 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:17:54.384 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5669], 99.95th=[ 5800], 00:17:54.384 | 99.99th=[ 5866] 00:17:54.384 bw ( KiB/s): min=17712, max=19840, per=26.64%, avg=19372.44, stdev=648.14, samples=9 00:17:54.384 iops : min= 2214, max= 2480, avg=2421.56, stdev=81.02, samples=9 00:17:54.384 lat (msec) : 2=14.79%, 4=45.45%, 10=39.76% 00:17:54.384 cpu : usr=91.94%, sys=7.06%, ctx=20, majf=0, minf=9 00:17:54.384 IO depths : 1=0.1%, 2=0.5%, 4=63.4%, 8=36.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 issued rwts: total=12113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:54.384 filename0: (groupid=0, jobs=1): err= 0: pid=74998: Mon Dec 2 07:45:19 2024 00:17:54.384 read: IOPS=2438, BW=19.1MiB/s (20.0MB/s)(95.3MiB/5002msec) 00:17:54.384 slat (nsec): min=6569, max=53823, avg=9878.55, stdev=4065.97 00:17:54.384 clat (usec): min=1356, max=5534, avg=3256.04, stdev=1010.28 00:17:54.384 lat (usec): min=1363, max=5556, avg=3265.92, stdev=1009.86 00:17:54.384 clat percentiles (usec): 00:17:54.384 | 1.00th=[ 1729], 5.00th=[ 1811], 10.00th=[ 1860], 20.00th=[ 2311], 00:17:54.384 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 3949], 00:17:54.384 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:17:54.384 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 4948], 99.95th=[ 5014], 00:17:54.384 | 99.99th=[ 5080] 00:17:54.384 bw ( KiB/s): min=19168, max=19792, per=26.86%, avg=19533.56, stdev=211.37, samples=9 00:17:54.384 iops : min= 2396, max= 2474, avg=2441.67, stdev=26.42, samples=9 00:17:54.384 lat (msec) : 2=15.38%, 4=45.91%, 10=38.71% 00:17:54.384 cpu : usr=91.54%, sys=7.48%, ctx=9, majf=0, minf=0 00:17:54.384 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 issued rwts: total=12199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:54.384 filename1: (groupid=0, jobs=1): err= 0: pid=74999: Mon Dec 2 07:45:19 2024 00:17:54.384 read: IOPS=1789, BW=14.0MiB/s (14.7MB/s)(69.9MiB/5001msec) 00:17:54.384 slat (usec): min=6, max=158, avg=10.37, stdev= 5.75 00:17:54.384 clat (usec): min=1583, max=5920, avg=4426.12, stdev=338.16 00:17:54.384 lat (usec): min=1591, max=5936, avg=4436.48, stdev=337.90 00:17:54.384 clat percentiles (usec): 00:17:54.384 | 1.00th=[ 2704], 5.00th=[ 4228], 
10.00th=[ 4293], 20.00th=[ 4359], 00:17:54.384 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4424], 00:17:54.384 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:17:54.384 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 5866], 99.95th=[ 5866], 00:17:54.384 | 99.99th=[ 5932] 00:17:54.384 bw ( KiB/s): min=13952, max=15328, per=19.74%, avg=14351.44, stdev=398.91, samples=9 00:17:54.384 iops : min= 1744, max= 1916, avg=1793.89, stdev=49.89, samples=9 00:17:54.384 lat (msec) : 2=0.37%, 4=2.48%, 10=97.15% 00:17:54.384 cpu : usr=91.60%, sys=7.24%, ctx=73, majf=0, minf=0 00:17:54.384 IO depths : 1=0.1%, 2=24.2%, 4=50.5%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 issued rwts: total=8950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:54.384 filename1: (groupid=0, jobs=1): err= 0: pid=75000: Mon Dec 2 07:45:19 2024 00:17:54.384 read: IOPS=2439, BW=19.1MiB/s (20.0MB/s)(95.3MiB/5001msec) 00:17:54.384 slat (nsec): min=6958, max=59779, avg=14009.47, stdev=4240.01 00:17:54.384 clat (usec): min=1021, max=6779, avg=3247.68, stdev=996.38 00:17:54.384 lat (usec): min=1028, max=6805, avg=3261.69, stdev=996.17 00:17:54.384 clat percentiles (usec): 00:17:54.384 | 1.00th=[ 1762], 5.00th=[ 1827], 10.00th=[ 1893], 20.00th=[ 2311], 00:17:54.384 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 3949], 00:17:54.384 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:17:54.384 | 99.00th=[ 4817], 99.50th=[ 4817], 99.90th=[ 4948], 99.95th=[ 5014], 00:17:54.384 | 99.99th=[ 5080] 00:17:54.384 bw ( KiB/s): min=19062, max=19840, per=26.85%, avg=19522.44, stdev=249.36, samples=9 00:17:54.384 iops : min= 2382, max= 2480, avg=2440.22, stdev=31.34, samples=9 00:17:54.384 lat (msec) : 2=14.91%, 4=46.39%, 10=38.70% 00:17:54.384 cpu : usr=91.66%, sys=7.30%, ctx=7, majf=0, minf=9 00:17:54.384 IO depths : 1=0.1%, 2=0.1%, 4=63.7%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.384 issued rwts: total=12199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.384 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:54.384 00:17:54.384 Run status group 0 (all jobs): 00:17:54.384 READ: bw=71.0MiB/s (74.5MB/s), 14.0MiB/s-19.1MiB/s (14.7MB/s-20.0MB/s), io=355MiB (372MB), run=5001-5002msec 00:17:54.384 07:45:19 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:17:54.384 07:45:19 -- target/dif.sh@43 -- # local sub 00:17:54.384 07:45:19 -- target/dif.sh@45 -- # for sub in "$@" 00:17:54.384 07:45:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:54.384 07:45:19 -- target/dif.sh@36 -- # local sub_id=0 00:17:54.384 07:45:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:54.384 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.384 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.384 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.384 07:45:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:54.384 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.384 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.384 07:45:19 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.384 07:45:19 -- target/dif.sh@45 -- # for sub in "$@" 00:17:54.384 07:45:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:17:54.384 07:45:19 -- target/dif.sh@36 -- # local sub_id=1 00:17:54.384 07:45:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.384 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.384 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.384 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.384 07:45:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:17:54.384 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.384 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.384 ************************************ 00:17:54.384 END TEST fio_dif_rand_params 00:17:54.384 ************************************ 00:17:54.384 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.384 00:17:54.384 real 0m23.099s 00:17:54.384 user 2m4.087s 00:17:54.384 sys 0m7.813s 00:17:54.384 07:45:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:54.385 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 07:45:19 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:17:54.385 07:45:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:54.385 07:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:54.385 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 ************************************ 00:17:54.385 START TEST fio_dif_digest 00:17:54.385 ************************************ 00:17:54.385 07:45:19 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:17:54.385 07:45:19 -- target/dif.sh@123 -- # local NULL_DIF 00:17:54.385 07:45:19 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:17:54.385 07:45:19 -- target/dif.sh@125 -- # local hdgst ddgst 00:17:54.385 07:45:19 -- target/dif.sh@127 -- # NULL_DIF=3 00:17:54.385 07:45:19 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:17:54.385 07:45:19 -- target/dif.sh@127 -- # numjobs=3 00:17:54.385 07:45:19 -- target/dif.sh@127 -- # iodepth=3 00:17:54.385 07:45:19 -- target/dif.sh@127 -- # runtime=10 00:17:54.385 07:45:19 -- target/dif.sh@128 -- # hdgst=true 00:17:54.385 07:45:19 -- target/dif.sh@128 -- # ddgst=true 00:17:54.385 07:45:19 -- target/dif.sh@130 -- # create_subsystems 0 00:17:54.385 07:45:19 -- target/dif.sh@28 -- # local sub 00:17:54.385 07:45:19 -- target/dif.sh@30 -- # for sub in "$@" 00:17:54.385 07:45:19 -- target/dif.sh@31 -- # create_subsystem 0 00:17:54.385 07:45:19 -- target/dif.sh@18 -- # local sub_id=0 00:17:54.385 07:45:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:17:54.385 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 bdev_null0 00:17:54.385 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.385 07:45:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:54.385 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.385 07:45:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:54.385 
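The rpc_cmd calls traced here build the digest-test target out of a metadata-capable null bdev. A minimal standalone sketch of the same sequence (including the listener added just below) via scripts/rpc.py, assuming the TCP transport created earlier in the run is still in place:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # null bdev with 512-byte blocks, 16-byte metadata and DIF type 3, as traced above
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420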
07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.385 07:45:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:54.385 07:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 07:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 [2024-12-02 07:45:19.913406] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.385 07:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.385 07:45:19 -- target/dif.sh@131 -- # fio /dev/fd/62 00:17:54.385 07:45:19 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:17:54.385 07:45:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:54.385 07:45:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:54.385 07:45:19 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:54.385 07:45:19 -- nvmf/common.sh@520 -- # config=() 00:17:54.385 07:45:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:54.385 07:45:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:54.385 07:45:19 -- nvmf/common.sh@520 -- # local subsystem config 00:17:54.385 07:45:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:54.385 07:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:54.385 07:45:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.385 07:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:54.385 { 00:17:54.385 "params": { 00:17:54.385 "name": "Nvme$subsystem", 00:17:54.385 "trtype": "$TEST_TRANSPORT", 00:17:54.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.385 "adrfam": "ipv4", 00:17:54.385 "trsvcid": "$NVMF_PORT", 00:17:54.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.385 "hdgst": ${hdgst:-false}, 00:17:54.385 "ddgst": ${ddgst:-false} 00:17:54.385 }, 00:17:54.385 "method": "bdev_nvme_attach_controller" 00:17:54.385 } 00:17:54.385 EOF 00:17:54.385 )") 00:17:54.385 07:45:19 -- common/autotest_common.sh@1330 -- # shift 00:17:54.385 07:45:19 -- target/dif.sh@82 -- # gen_fio_conf 00:17:54.385 07:45:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:54.385 07:45:19 -- target/dif.sh@54 -- # local file 00:17:54.385 07:45:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.385 07:45:19 -- target/dif.sh@56 -- # cat 00:17:54.385 07:45:19 -- nvmf/common.sh@542 -- # cat 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:54.385 07:45:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:54.385 07:45:19 -- target/dif.sh@72 -- # (( file <= files )) 00:17:54.385 07:45:19 -- nvmf/common.sh@544 -- # jq . 
00:17:54.385 07:45:19 -- nvmf/common.sh@545 -- # IFS=, 00:17:54.385 07:45:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:54.385 "params": { 00:17:54.385 "name": "Nvme0", 00:17:54.385 "trtype": "tcp", 00:17:54.385 "traddr": "10.0.0.2", 00:17:54.385 "adrfam": "ipv4", 00:17:54.385 "trsvcid": "4420", 00:17:54.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:54.385 "hdgst": true, 00:17:54.385 "ddgst": true 00:17:54.385 }, 00:17:54.385 "method": "bdev_nvme_attach_controller" 00:17:54.385 }' 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:54.385 07:45:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:54.385 07:45:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:54.385 07:45:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:54.385 07:45:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:54.385 07:45:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:54.385 07:45:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:54.644 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:17:54.644 ... 00:17:54.644 fio-3.35 00:17:54.644 Starting 3 threads 00:17:54.903 [2024-12-02 07:45:20.464332] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
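For reference, the digest run starting here boils down to a 128 KiB random-read job with three threads and a queue depth of 3 against the attached Nvme0 bdev, with header and data digests (hdgst/ddgst) enabled on the TCP connection. A hedged sketch of an equivalent hand-written job file; the section name, filename and direct/thread settings are illustrative, and the harness actually generates its own job via gen_fio_conf:

  cat > /tmp/digest.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  time_based=1
  runtime=10

  [filename0]
  filename=Nvme0n1
  numjobs=3
  EOF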
00:17:54.903 [2024-12-02 07:45:20.464400] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:07.145 00:18:07.145 filename0: (groupid=0, jobs=1): err= 0: pid=75110: Mon Dec 2 07:45:30 2024 00:18:07.145 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10005msec) 00:18:07.145 slat (nsec): min=6844, max=53201, avg=14796.20, stdev=4678.94 00:18:07.145 clat (usec): min=11319, max=14694, avg=11821.62, stdev=408.80 00:18:07.145 lat (usec): min=11330, max=14707, avg=11836.41, stdev=409.05 00:18:07.145 clat percentiles (usec): 00:18:07.145 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:18:07.145 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:18:07.145 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:18:07.145 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14615], 99.95th=[14746], 00:18:07.145 | 99.99th=[14746] 00:18:07.145 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32373.84, stdev=463.63, samples=19 00:18:07.145 iops : min= 246, max= 258, avg=252.89, stdev= 3.63, samples=19 00:18:07.145 lat (msec) : 20=100.00% 00:18:07.145 cpu : usr=92.29%, sys=7.14%, ctx=16, majf=0, minf=9 00:18:07.145 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.145 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:07.145 filename0: (groupid=0, jobs=1): err= 0: pid=75111: Mon Dec 2 07:45:30 2024 00:18:07.145 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10004msec) 00:18:07.145 slat (nsec): min=6736, max=51440, avg=14814.04, stdev=4703.33 00:18:07.145 clat (usec): min=11304, max=14691, avg=11819.71, stdev=402.76 00:18:07.145 lat (usec): min=11319, max=14706, avg=11834.53, stdev=402.94 00:18:07.145 clat percentiles (usec): 00:18:07.145 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:18:07.145 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:18:07.145 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:18:07.145 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14615], 99.95th=[14615], 00:18:07.145 | 99.99th=[14746] 00:18:07.145 bw ( KiB/s): min=31488, max=33024, per=33.33%, avg=32380.63, stdev=461.74, samples=19 00:18:07.145 iops : min= 246, max= 258, avg=252.95, stdev= 3.61, samples=19 00:18:07.145 lat (msec) : 20=100.00% 00:18:07.145 cpu : usr=92.16%, sys=7.28%, ctx=8, majf=0, minf=9 00:18:07.145 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.145 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:07.145 filename0: (groupid=0, jobs=1): err= 0: pid=75112: Mon Dec 2 07:45:30 2024 00:18:07.145 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10007msec) 00:18:07.145 slat (nsec): min=6679, max=53764, avg=13441.39, stdev=4989.56 00:18:07.145 clat (usec): min=11357, max=15858, avg=11827.01, stdev=424.41 00:18:07.145 lat (usec): min=11364, max=15884, avg=11840.45, stdev=424.48 00:18:07.145 clat percentiles (usec): 00:18:07.145 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 
20.00th=[11600], 00:18:07.145 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:18:07.145 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:18:07.145 | 99.00th=[13304], 99.50th=[13435], 99.90th=[15795], 99.95th=[15795], 00:18:07.145 | 99.99th=[15795] 00:18:07.145 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32377.26, stdev=528.57, samples=19 00:18:07.145 iops : min= 246, max= 258, avg=252.95, stdev= 4.13, samples=19 00:18:07.145 lat (msec) : 20=100.00% 00:18:07.145 cpu : usr=92.42%, sys=7.02%, ctx=23, majf=0, minf=9 00:18:07.145 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.145 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:07.145 00:18:07.145 Run status group 0 (all jobs): 00:18:07.145 READ: bw=94.9MiB/s (99.5MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=950MiB (996MB), run=10004-10007msec 00:18:07.145 07:45:30 -- target/dif.sh@132 -- # destroy_subsystems 0 00:18:07.145 07:45:30 -- target/dif.sh@43 -- # local sub 00:18:07.145 07:45:30 -- target/dif.sh@45 -- # for sub in "$@" 00:18:07.145 07:45:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:07.145 07:45:30 -- target/dif.sh@36 -- # local sub_id=0 00:18:07.145 07:45:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:07.145 07:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.145 07:45:30 -- common/autotest_common.sh@10 -- # set +x 00:18:07.145 07:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.145 07:45:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:07.145 07:45:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.145 07:45:30 -- common/autotest_common.sh@10 -- # set +x 00:18:07.145 ************************************ 00:18:07.145 END TEST fio_dif_digest 00:18:07.145 ************************************ 00:18:07.145 07:45:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.145 00:18:07.145 real 0m10.896s 00:18:07.145 user 0m28.307s 00:18:07.145 sys 0m2.355s 00:18:07.145 07:45:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:07.145 07:45:30 -- common/autotest_common.sh@10 -- # set +x 00:18:07.145 07:45:30 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:07.145 07:45:30 -- target/dif.sh@147 -- # nvmftestfini 00:18:07.145 07:45:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:07.145 07:45:30 -- nvmf/common.sh@116 -- # sync 00:18:07.145 07:45:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.145 07:45:30 -- nvmf/common.sh@119 -- # set +e 00:18:07.145 07:45:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.145 07:45:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.145 rmmod nvme_tcp 00:18:07.145 rmmod nvme_fabrics 00:18:07.145 rmmod nvme_keyring 00:18:07.145 07:45:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.145 07:45:30 -- nvmf/common.sh@123 -- # set -e 00:18:07.145 07:45:30 -- nvmf/common.sh@124 -- # return 0 00:18:07.145 07:45:30 -- nvmf/common.sh@477 -- # '[' -n 74345 ']' 00:18:07.145 07:45:30 -- nvmf/common.sh@478 -- # killprocess 74345 00:18:07.145 07:45:30 -- common/autotest_common.sh@936 -- # '[' -z 74345 ']' 00:18:07.145 07:45:30 -- common/autotest_common.sh@940 -- # kill -0 74345 
00:18:07.145 07:45:30 -- common/autotest_common.sh@941 -- # uname 00:18:07.145 07:45:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.145 07:45:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74345 00:18:07.145 killing process with pid 74345 00:18:07.145 07:45:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:07.145 07:45:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:07.145 07:45:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74345' 00:18:07.145 07:45:30 -- common/autotest_common.sh@955 -- # kill 74345 00:18:07.145 07:45:30 -- common/autotest_common.sh@960 -- # wait 74345 00:18:07.145 07:45:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:18:07.145 07:45:31 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:07.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.145 Waiting for block devices as requested 00:18:07.145 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:18:07.145 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:18:07.145 07:45:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.145 07:45:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.145 07:45:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.145 07:45:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.145 07:45:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.145 07:45:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:07.145 07:45:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.145 07:45:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:07.145 00:18:07.145 real 0m59.007s 00:18:07.145 user 3m47.799s 00:18:07.145 sys 0m18.484s 00:18:07.145 ************************************ 00:18:07.145 END TEST nvmf_dif 00:18:07.145 ************************************ 00:18:07.145 07:45:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:07.145 07:45:31 -- common/autotest_common.sh@10 -- # set +x 00:18:07.145 07:45:31 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:18:07.145 07:45:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:07.145 07:45:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.145 07:45:31 -- common/autotest_common.sh@10 -- # set +x 00:18:07.145 ************************************ 00:18:07.145 START TEST nvmf_abort_qd_sizes 00:18:07.145 ************************************ 00:18:07.145 07:45:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:18:07.145 * Looking for test storage... 
00:18:07.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.146 07:45:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:07.146 07:45:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:07.146 07:45:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:07.146 07:45:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:07.146 07:45:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:07.146 07:45:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:07.146 07:45:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:07.146 07:45:31 -- scripts/common.sh@335 -- # IFS=.-: 00:18:07.146 07:45:31 -- scripts/common.sh@335 -- # read -ra ver1 00:18:07.146 07:45:31 -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.146 07:45:31 -- scripts/common.sh@336 -- # read -ra ver2 00:18:07.146 07:45:31 -- scripts/common.sh@337 -- # local 'op=<' 00:18:07.146 07:45:31 -- scripts/common.sh@339 -- # ver1_l=2 00:18:07.146 07:45:31 -- scripts/common.sh@340 -- # ver2_l=1 00:18:07.146 07:45:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:07.146 07:45:31 -- scripts/common.sh@343 -- # case "$op" in 00:18:07.146 07:45:31 -- scripts/common.sh@344 -- # : 1 00:18:07.146 07:45:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:07.146 07:45:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.146 07:45:31 -- scripts/common.sh@364 -- # decimal 1 00:18:07.146 07:45:31 -- scripts/common.sh@352 -- # local d=1 00:18:07.146 07:45:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.146 07:45:31 -- scripts/common.sh@354 -- # echo 1 00:18:07.146 07:45:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:07.146 07:45:31 -- scripts/common.sh@365 -- # decimal 2 00:18:07.146 07:45:31 -- scripts/common.sh@352 -- # local d=2 00:18:07.146 07:45:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.146 07:45:31 -- scripts/common.sh@354 -- # echo 2 00:18:07.146 07:45:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:07.146 07:45:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:07.146 07:45:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:07.146 07:45:31 -- scripts/common.sh@367 -- # return 0 00:18:07.146 07:45:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.146 07:45:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.146 --rc genhtml_branch_coverage=1 00:18:07.146 --rc genhtml_function_coverage=1 00:18:07.146 --rc genhtml_legend=1 00:18:07.146 --rc geninfo_all_blocks=1 00:18:07.146 --rc geninfo_unexecuted_blocks=1 00:18:07.146 00:18:07.146 ' 00:18:07.146 07:45:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.146 --rc genhtml_branch_coverage=1 00:18:07.146 --rc genhtml_function_coverage=1 00:18:07.146 --rc genhtml_legend=1 00:18:07.146 --rc geninfo_all_blocks=1 00:18:07.146 --rc geninfo_unexecuted_blocks=1 00:18:07.146 00:18:07.146 ' 00:18:07.146 07:45:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.146 --rc genhtml_branch_coverage=1 00:18:07.146 --rc genhtml_function_coverage=1 00:18:07.146 --rc genhtml_legend=1 00:18:07.146 --rc geninfo_all_blocks=1 00:18:07.146 --rc geninfo_unexecuted_blocks=1 00:18:07.146 00:18:07.146 ' 00:18:07.146 
07:45:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.146 --rc genhtml_branch_coverage=1 00:18:07.146 --rc genhtml_function_coverage=1 00:18:07.146 --rc genhtml_legend=1 00:18:07.146 --rc geninfo_all_blocks=1 00:18:07.146 --rc geninfo_unexecuted_blocks=1 00:18:07.146 00:18:07.146 ' 00:18:07.146 07:45:31 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.146 07:45:31 -- nvmf/common.sh@7 -- # uname -s 00:18:07.146 07:45:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.146 07:45:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.146 07:45:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.146 07:45:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.146 07:45:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.146 07:45:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.146 07:45:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.146 07:45:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.146 07:45:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.146 07:45:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.146 07:45:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a 00:18:07.146 07:45:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=a5868676-2bf9-4edd-881a-97dc92ed874a 00:18:07.146 07:45:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.146 07:45:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.146 07:45:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.146 07:45:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.146 07:45:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.146 07:45:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.146 07:45:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.146 07:45:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.146 07:45:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.146 07:45:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.146 07:45:31 -- paths/export.sh@5 -- # export PATH 00:18:07.146 07:45:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.146 07:45:31 -- nvmf/common.sh@46 -- # : 0 00:18:07.146 07:45:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.146 07:45:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.146 07:45:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.146 07:45:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.146 07:45:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.146 07:45:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:07.146 07:45:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.146 07:45:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.146 07:45:31 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:18:07.146 07:45:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.146 07:45:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.146 07:45:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.146 07:45:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.146 07:45:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.146 07:45:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.146 07:45:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:07.146 07:45:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.146 07:45:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:07.146 07:45:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:07.146 07:45:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:07.146 07:45:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:07.146 07:45:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:07.146 07:45:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:07.146 07:45:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.146 07:45:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.146 07:45:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.146 07:45:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:07.146 07:45:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.146 07:45:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.146 07:45:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.146 07:45:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.146 07:45:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.146 07:45:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.146 07:45:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.146 07:45:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.146 07:45:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:07.146 07:45:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:07.146 Cannot find device "nvmf_tgt_br" 00:18:07.146 07:45:31 -- nvmf/common.sh@154 -- # true 00:18:07.146 07:45:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.146 Cannot find device "nvmf_tgt_br2" 00:18:07.146 07:45:31 -- nvmf/common.sh@155 -- # true 
00:18:07.146 07:45:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:07.146 07:45:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:07.146 Cannot find device "nvmf_tgt_br" 00:18:07.146 07:45:31 -- nvmf/common.sh@157 -- # true 00:18:07.146 07:45:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:07.146 Cannot find device "nvmf_tgt_br2" 00:18:07.146 07:45:32 -- nvmf/common.sh@158 -- # true 00:18:07.146 07:45:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:07.146 07:45:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:07.146 07:45:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.146 07:45:32 -- nvmf/common.sh@161 -- # true 00:18:07.146 07:45:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.147 07:45:32 -- nvmf/common.sh@162 -- # true 00:18:07.147 07:45:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.147 07:45:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.147 07:45:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.147 07:45:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.147 07:45:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.147 07:45:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.147 07:45:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.147 07:45:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.147 07:45:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.147 07:45:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:07.147 07:45:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:07.147 07:45:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:07.147 07:45:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:07.147 07:45:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.147 07:45:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.147 07:45:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.147 07:45:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:07.147 07:45:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:07.147 07:45:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.147 07:45:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.147 07:45:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.147 07:45:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.147 07:45:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.147 07:45:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:07.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:07.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:18:07.147 00:18:07.147 --- 10.0.0.2 ping statistics --- 00:18:07.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.147 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:18:07.147 07:45:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:07.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:07.147 00:18:07.147 --- 10.0.0.3 ping statistics --- 00:18:07.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.147 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:07.147 07:45:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:07.147 00:18:07.147 --- 10.0.0.1 ping statistics --- 00:18:07.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.147 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:07.147 07:45:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.147 07:45:32 -- nvmf/common.sh@421 -- # return 0 00:18:07.147 07:45:32 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:07.147 07:45:32 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:07.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.406 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.666 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.666 07:45:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.666 07:45:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:07.666 07:45:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:07.666 07:45:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.666 07:45:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:07.666 07:45:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:07.666 07:45:33 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:18:07.666 07:45:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:07.666 07:45:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.666 07:45:33 -- common/autotest_common.sh@10 -- # set +x 00:18:07.666 07:45:33 -- nvmf/common.sh@469 -- # nvmfpid=75708 00:18:07.666 07:45:33 -- nvmf/common.sh@470 -- # waitforlisten 75708 00:18:07.666 07:45:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:18:07.666 07:45:33 -- common/autotest_common.sh@829 -- # '[' -z 75708 ']' 00:18:07.666 07:45:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.666 07:45:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.666 07:45:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.666 07:45:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.666 07:45:33 -- common/autotest_common.sh@10 -- # set +x 00:18:07.667 [2024-12-02 07:45:33.196376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
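At this point the abort_qd_sizes test has the veth/bridge topology up (initiator at 10.0.0.1, target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all reachable per the pings above) and is starting the target application on four cores. A rough standalone equivalent of that launch plus the wait for the RPC socket; the polling loop is illustrative rather than the harness's waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # /var/tmp/spdk.sock is a plain Unix socket, so it is reachable from outside the netns;
  # poll it until the app answers an RPC
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done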
00:18:07.667 [2024-12-02 07:45:33.197161] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.926 [2024-12-02 07:45:33.336438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.926 [2024-12-02 07:45:33.407416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:07.926 [2024-12-02 07:45:33.407580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.926 [2024-12-02 07:45:33.407597] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.926 [2024-12-02 07:45:33.407609] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.926 [2024-12-02 07:45:33.407772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.926 [2024-12-02 07:45:33.409043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.926 [2024-12-02 07:45:33.409154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.926 [2024-12-02 07:45:33.409171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.863 07:45:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.863 07:45:34 -- common/autotest_common.sh@862 -- # return 0 00:18:08.863 07:45:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:08.863 07:45:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.863 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.863 07:45:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.863 07:45:34 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:18:08.863 07:45:34 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:18:08.863 07:45:34 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:18:08.863 07:45:34 -- scripts/common.sh@311 -- # local bdf bdfs 00:18:08.863 07:45:34 -- scripts/common.sh@312 -- # local nvmes 00:18:08.863 07:45:34 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:18:08.863 07:45:34 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:08.863 07:45:34 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:18:08.863 07:45:34 -- scripts/common.sh@297 -- # local bdf= 00:18:08.863 07:45:34 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:18:08.863 07:45:34 -- scripts/common.sh@232 -- # local class 00:18:08.863 07:45:34 -- scripts/common.sh@233 -- # local subclass 00:18:08.863 07:45:34 -- scripts/common.sh@234 -- # local progif 00:18:08.863 07:45:34 -- scripts/common.sh@235 -- # printf %02x 1 00:18:08.863 07:45:34 -- scripts/common.sh@235 -- # class=01 00:18:08.863 07:45:34 -- scripts/common.sh@236 -- # printf %02x 8 00:18:08.863 07:45:34 -- scripts/common.sh@236 -- # subclass=08 00:18:08.863 07:45:34 -- scripts/common.sh@237 -- # printf %02x 2 00:18:08.863 07:45:34 -- scripts/common.sh@237 -- # progif=02 00:18:08.863 07:45:34 -- scripts/common.sh@239 -- # hash lspci 00:18:08.863 07:45:34 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:18:08.863 07:45:34 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:18:08.863 07:45:34 -- scripts/common.sh@242 -- # grep -i -- -p02 00:18:08.863 07:45:34 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:08.863 07:45:34 -- scripts/common.sh@244 -- # tr -d '"' 00:18:08.863 07:45:34 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:08.863 07:45:34 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:18:08.863 07:45:34 -- scripts/common.sh@15 -- # local i 00:18:08.863 07:45:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:18:08.863 07:45:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:08.863 07:45:34 -- scripts/common.sh@24 -- # return 0 00:18:08.863 07:45:34 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:18:08.863 07:45:34 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:08.864 07:45:34 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:18:08.864 07:45:34 -- scripts/common.sh@15 -- # local i 00:18:08.864 07:45:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:18:08.864 07:45:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:08.864 07:45:34 -- scripts/common.sh@24 -- # return 0 00:18:08.864 07:45:34 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:18:08.864 07:45:34 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:08.864 07:45:34 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:18:08.864 07:45:34 -- scripts/common.sh@322 -- # uname -s 00:18:08.864 07:45:34 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:08.864 07:45:34 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:08.864 07:45:34 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:08.864 07:45:34 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:18:08.864 07:45:34 -- scripts/common.sh@322 -- # uname -s 00:18:08.864 07:45:34 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:08.864 07:45:34 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:08.864 07:45:34 -- scripts/common.sh@327 -- # (( 2 )) 00:18:08.864 07:45:34 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:18:08.864 07:45:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:08.864 07:45:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.864 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.864 ************************************ 00:18:08.864 START TEST spdk_target_abort 00:18:08.864 ************************************ 00:18:08.864 07:45:34 -- common/autotest_common.sh@1114 -- # spdk_target 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:18:08.864 07:45:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.864 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.864 spdk_targetn1 00:18:08.864 07:45:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.864 07:45:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.864 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.864 [2024-12-02 
07:45:34.372156] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.864 07:45:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:18:08.864 07:45:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.864 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.864 07:45:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:18:08.864 07:45:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.864 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.864 07:45:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:18:08.864 07:45:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.864 07:45:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.864 [2024-12-02 07:45:34.400287] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.864 07:45:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@24 -- # local target r 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:08.864 07:45:34 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:12.151 Initializing NVMe Controllers 00:18:12.151 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:12.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:12.151 Initialization complete. Launching workers. 00:18:12.152 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9993, failed: 0 00:18:12.152 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1084, failed to submit 8909 00:18:12.152 success 852, unsuccess 232, failed 0 00:18:12.152 07:45:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:12.152 07:45:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:15.441 Initializing NVMe Controllers 00:18:15.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:15.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:15.441 Initialization complete. Launching workers. 00:18:15.441 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8928, failed: 0 00:18:15.441 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1130, failed to submit 7798 00:18:15.441 success 397, unsuccess 733, failed 0 00:18:15.441 07:45:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:15.441 07:45:40 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:18.727 Initializing NVMe Controllers 00:18:18.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:18.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:18.727 Initialization complete. Launching workers. 
00:18:18.727 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31869, failed: 0 00:18:18.727 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2331, failed to submit 29538 00:18:18.727 success 508, unsuccess 1823, failed 0 00:18:18.727 07:45:44 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:18:18.727 07:45:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.727 07:45:44 -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 07:45:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.727 07:45:44 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:18:18.727 07:45:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.727 07:45:44 -- common/autotest_common.sh@10 -- # set +x 00:18:18.986 07:45:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.986 07:45:44 -- target/abort_qd_sizes.sh@62 -- # killprocess 75708 00:18:18.986 07:45:44 -- common/autotest_common.sh@936 -- # '[' -z 75708 ']' 00:18:18.986 07:45:44 -- common/autotest_common.sh@940 -- # kill -0 75708 00:18:18.986 07:45:44 -- common/autotest_common.sh@941 -- # uname 00:18:18.986 07:45:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.986 07:45:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75708 00:18:18.986 07:45:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.986 killing process with pid 75708 00:18:18.986 07:45:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.986 07:45:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75708' 00:18:18.986 07:45:44 -- common/autotest_common.sh@955 -- # kill 75708 00:18:18.986 07:45:44 -- common/autotest_common.sh@960 -- # wait 75708 00:18:19.245 00:18:19.245 real 0m10.405s 00:18:19.245 user 0m42.456s 00:18:19.245 sys 0m1.981s 00:18:19.245 ************************************ 00:18:19.245 END TEST spdk_target_abort 00:18:19.245 ************************************ 00:18:19.245 07:45:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:19.245 07:45:44 -- common/autotest_common.sh@10 -- # set +x 00:18:19.245 07:45:44 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:18:19.245 07:45:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:19.245 07:45:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.245 07:45:44 -- common/autotest_common.sh@10 -- # set +x 00:18:19.245 ************************************ 00:18:19.245 START TEST kernel_target_abort 00:18:19.245 ************************************ 00:18:19.245 07:45:44 -- common/autotest_common.sh@1114 -- # kernel_target 00:18:19.245 07:45:44 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:18:19.245 07:45:44 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:18:19.245 07:45:44 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:18:19.245 07:45:44 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:18:19.245 07:45:44 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:18:19.245 07:45:44 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:19.245 07:45:44 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:19.245 07:45:44 -- nvmf/common.sh@627 -- # local block nvme 00:18:19.245 07:45:44 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:18:19.245 07:45:44 -- nvmf/common.sh@630 -- # modprobe nvmet 00:18:19.245 07:45:44 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:19.245 07:45:44 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.762 Waiting for block devices as requested 00:18:19.762 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.762 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.762 07:45:45 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:19.762 07:45:45 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:19.762 07:45:45 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:18:19.762 07:45:45 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:18:19.762 07:45:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:20.021 No valid GPT data, bailing 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # pt= 00:18:20.021 07:45:45 -- scripts/common.sh@394 -- # return 1 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:18:20.021 07:45:45 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:20.021 07:45:45 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:18:20.021 07:45:45 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:18:20.021 07:45:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:20.021 No valid GPT data, bailing 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # pt= 00:18:20.021 07:45:45 -- scripts/common.sh@394 -- # return 1 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:18:20.021 07:45:45 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:20.021 07:45:45 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:18:20.021 07:45:45 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:18:20.021 07:45:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:18:20.021 No valid GPT data, bailing 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # pt= 00:18:20.021 07:45:45 -- scripts/common.sh@394 -- # return 1 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:18:20.021 07:45:45 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:20.021 07:45:45 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:18:20.021 07:45:45 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:18:20.021 07:45:45 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:18:20.021 No valid GPT data, bailing 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:18:20.021 07:45:45 -- scripts/common.sh@393 -- # pt= 00:18:20.021 07:45:45 -- scripts/common.sh@394 -- # return 1 00:18:20.021 07:45:45 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:18:20.021 07:45:45 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:18:20.021 07:45:45 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:18:20.021 07:45:45 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:20.021 07:45:45 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:20.021 07:45:45 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:18:20.021 07:45:45 -- nvmf/common.sh@654 -- # echo 1 00:18:20.021 07:45:45 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:18:20.021 07:45:45 -- nvmf/common.sh@656 -- # echo 1 00:18:20.021 07:45:45 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:18:20.021 07:45:45 -- nvmf/common.sh@663 -- # echo tcp 00:18:20.021 07:45:45 -- nvmf/common.sh@664 -- # echo 4420 00:18:20.021 07:45:45 -- nvmf/common.sh@665 -- # echo ipv4 00:18:20.021 07:45:45 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:20.278 07:45:45 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a5868676-2bf9-4edd-881a-97dc92ed874a --hostid=a5868676-2bf9-4edd-881a-97dc92ed874a -a 10.0.0.1 -t tcp -s 4420 00:18:20.278 00:18:20.278 Discovery Log Number of Records 2, Generation counter 2 00:18:20.278 =====Discovery Log Entry 0====== 00:18:20.278 trtype: tcp 00:18:20.278 adrfam: ipv4 00:18:20.278 subtype: current discovery subsystem 00:18:20.278 treq: not specified, sq flow control disable supported 00:18:20.278 portid: 1 00:18:20.278 trsvcid: 4420 00:18:20.278 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:20.278 traddr: 10.0.0.1 00:18:20.278 eflags: none 00:18:20.278 sectype: none 00:18:20.278 =====Discovery Log Entry 1====== 00:18:20.278 trtype: tcp 00:18:20.278 adrfam: ipv4 00:18:20.278 subtype: nvme subsystem 00:18:20.278 treq: not specified, sq flow control disable supported 00:18:20.278 portid: 1 00:18:20.278 trsvcid: 4420 00:18:20.278 subnqn: kernel_target 00:18:20.278 traddr: 10.0.0.1 00:18:20.278 eflags: none 00:18:20.278 sectype: none 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@24 -- # local target r 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:20.278 07:45:45 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:20.279 07:45:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:23.568 Initializing NVMe Controllers 00:18:23.568 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:23.568 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:23.568 Initialization complete. Launching workers. 00:18:23.568 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 32269, failed: 0 00:18:23.568 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32269, failed to submit 0 00:18:23.568 success 0, unsuccess 32269, failed 0 00:18:23.568 07:45:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:23.568 07:45:48 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:26.860 Initializing NVMe Controllers 00:18:26.860 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:26.860 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:26.860 Initialization complete. Launching workers. 00:18:26.860 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64263, failed: 0 00:18:26.860 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25907, failed to submit 38356 00:18:26.860 success 0, unsuccess 25907, failed 0 00:18:26.860 07:45:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:26.860 07:45:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:30.148 Initializing NVMe Controllers 00:18:30.148 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:30.148 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:30.148 Initialization complete. Launching workers. 
00:18:30.148 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 69825, failed: 0 00:18:30.148 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17446, failed to submit 52379 00:18:30.148 success 0, unsuccess 17446, failed 0 00:18:30.148 07:45:55 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:18:30.148 07:45:55 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:18:30.148 07:45:55 -- nvmf/common.sh@677 -- # echo 0 00:18:30.148 07:45:55 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:18:30.148 07:45:55 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:30.148 07:45:55 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:30.148 07:45:55 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:18:30.148 07:45:55 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:18:30.148 07:45:55 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:18:30.148 ************************************ 00:18:30.148 END TEST kernel_target_abort 00:18:30.148 ************************************ 00:18:30.148 00:18:30.148 real 0m10.500s 00:18:30.148 user 0m5.207s 00:18:30.148 sys 0m2.616s 00:18:30.148 07:45:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:30.148 07:45:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.148 07:45:55 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:18:30.148 07:45:55 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:18:30.148 07:45:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.148 07:45:55 -- nvmf/common.sh@116 -- # sync 00:18:30.148 07:45:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:30.148 07:45:55 -- nvmf/common.sh@119 -- # set +e 00:18:30.148 07:45:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.148 07:45:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:30.148 rmmod nvme_tcp 00:18:30.148 rmmod nvme_fabrics 00:18:30.148 rmmod nvme_keyring 00:18:30.148 07:45:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.149 07:45:55 -- nvmf/common.sh@123 -- # set -e 00:18:30.149 07:45:55 -- nvmf/common.sh@124 -- # return 0 00:18:30.149 07:45:55 -- nvmf/common.sh@477 -- # '[' -n 75708 ']' 00:18:30.149 07:45:55 -- nvmf/common.sh@478 -- # killprocess 75708 00:18:30.149 07:45:55 -- common/autotest_common.sh@936 -- # '[' -z 75708 ']' 00:18:30.149 07:45:55 -- common/autotest_common.sh@940 -- # kill -0 75708 00:18:30.149 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75708) - No such process 00:18:30.149 Process with pid 75708 is not found 00:18:30.149 07:45:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75708 is not found' 00:18:30.149 07:45:55 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:18:30.149 07:45:55 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:30.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:30.716 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:18:30.716 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:18:30.716 07:45:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:30.716 07:45:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:30.716 07:45:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.716 07:45:56 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:18:30.716 07:45:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.716 07:45:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:30.716 07:45:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.716 07:45:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:30.716 ************************************ 00:18:30.716 END TEST nvmf_abort_qd_sizes 00:18:30.716 ************************************ 00:18:30.716 00:18:30.716 real 0m24.471s 00:18:30.716 user 0m49.110s 00:18:30.716 sys 0m5.964s 00:18:30.716 07:45:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:30.716 07:45:56 -- common/autotest_common.sh@10 -- # set +x 00:18:30.716 07:45:56 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:30.716 07:45:56 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:18:30.716 07:45:56 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:18:30.716 07:45:56 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:18:30.716 07:45:56 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:18:30.716 07:45:56 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:18:30.716 07:45:56 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:18:30.716 07:45:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.716 07:45:56 -- common/autotest_common.sh@10 -- # set +x 00:18:30.716 07:45:56 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:18:30.716 07:45:56 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:18:30.716 07:45:56 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:18:30.716 07:45:56 -- common/autotest_common.sh@10 -- # set +x 00:18:32.623 INFO: APP EXITING 00:18:32.623 INFO: killing all VMs 00:18:32.623 INFO: killing vhost app 00:18:32.623 INFO: EXIT DONE 00:18:33.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:33.192 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:18:33.192 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:18:34.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:34.128 Cleaning 00:18:34.128 Removing: /var/run/dpdk/spdk0/config 00:18:34.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:34.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:34.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:34.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:34.128 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:34.128 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:34.128 Removing: /var/run/dpdk/spdk1/config 00:18:34.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:18:34.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:18:34.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:18:34.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:18:34.128 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:18:34.128 Removing: /var/run/dpdk/spdk1/hugepage_info 00:18:34.128 Removing: /var/run/dpdk/spdk2/config 00:18:34.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:18:34.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:18:34.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:18:34.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:18:34.128 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:18:34.128 Removing: /var/run/dpdk/spdk2/hugepage_info 00:18:34.128 Removing: /var/run/dpdk/spdk3/config 00:18:34.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:18:34.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:18:34.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:18:34.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:18:34.128 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:18:34.128 Removing: /var/run/dpdk/spdk3/hugepage_info 00:18:34.128 Removing: /var/run/dpdk/spdk4/config 00:18:34.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:18:34.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:18:34.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:18:34.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:18:34.128 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:18:34.128 Removing: /var/run/dpdk/spdk4/hugepage_info 00:18:34.128 Removing: /dev/shm/nvmf_trace.0 00:18:34.128 Removing: /dev/shm/spdk_tgt_trace.pid53794 00:18:34.128 Removing: /var/run/dpdk/spdk0 00:18:34.128 Removing: /var/run/dpdk/spdk1 00:18:34.128 Removing: /var/run/dpdk/spdk2 00:18:34.128 Removing: /var/run/dpdk/spdk3 00:18:34.128 Removing: /var/run/dpdk/spdk4 00:18:34.128 Removing: /var/run/dpdk/spdk_pid53646 00:18:34.128 Removing: /var/run/dpdk/spdk_pid53794 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54049 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54238 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54386 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54463 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54540 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54633 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54717 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54750 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54790 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54854 00:18:34.128 Removing: /var/run/dpdk/spdk_pid54937 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55377 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55423 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55474 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55490 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55546 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55562 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55624 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55640 00:18:34.128 Removing: /var/run/dpdk/spdk_pid55680 00:18:34.387 Removing: /var/run/dpdk/spdk_pid55698 00:18:34.387 Removing: /var/run/dpdk/spdk_pid55738 00:18:34.387 Removing: /var/run/dpdk/spdk_pid55756 00:18:34.387 Removing: /var/run/dpdk/spdk_pid55893 00:18:34.387 Removing: /var/run/dpdk/spdk_pid55923 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56010 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56056 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56075 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56139 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56153 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56193 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56207 
00:18:34.387 Removing: /var/run/dpdk/spdk_pid56236 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56256 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56290 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56304 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56339 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56358 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56387 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56407 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56441 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56455 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56490 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56509 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56538 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56558 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56592 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56606 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56641 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56656 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56695 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56709 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56738 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56763 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56792 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56806 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56846 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56860 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56889 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56914 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56943 00:18:34.387 Removing: /var/run/dpdk/spdk_pid56960 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57003 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57020 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57058 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57077 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57106 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57126 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57163 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57235 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57322 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57654 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57670 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57702 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57715 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57728 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57747 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57761 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57774 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57788 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57805 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57813 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57831 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57849 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57857 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57875 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57893 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57901 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57919 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57937 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57945 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57980 00:18:34.387 Removing: /var/run/dpdk/spdk_pid57987 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58020 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58091 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58112 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58116 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58150 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58154 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58167 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58202 00:18:34.387 Removing: 
/var/run/dpdk/spdk_pid58219 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58240 00:18:34.387 Removing: /var/run/dpdk/spdk_pid58248 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58255 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58257 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58270 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58272 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58280 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58289 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58316 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58342 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58346 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58380 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58384 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58392 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58432 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58444 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58470 00:18:34.646 Removing: /var/run/dpdk/spdk_pid58478 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58485 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58487 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58495 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58502 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58510 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58517 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58593 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58635 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58741 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58767 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58811 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58831 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58851 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58860 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58895 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58904 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58980 00:18:34.647 Removing: /var/run/dpdk/spdk_pid58994 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59049 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59116 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59161 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59183 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59282 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59322 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59354 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59577 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59669 00:18:34.647 Removing: /var/run/dpdk/spdk_pid59697 00:18:34.647 Removing: /var/run/dpdk/spdk_pid60028 00:18:34.647 Removing: /var/run/dpdk/spdk_pid60066 00:18:34.647 Removing: /var/run/dpdk/spdk_pid60370 00:18:34.647 Removing: /var/run/dpdk/spdk_pid60786 00:18:34.647 Removing: /var/run/dpdk/spdk_pid61049 00:18:34.647 Removing: /var/run/dpdk/spdk_pid61796 00:18:34.647 Removing: /var/run/dpdk/spdk_pid62625 00:18:34.647 Removing: /var/run/dpdk/spdk_pid62738 00:18:34.647 Removing: /var/run/dpdk/spdk_pid62800 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64078 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64291 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64610 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64720 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64853 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64875 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64903 00:18:34.647 Removing: /var/run/dpdk/spdk_pid64930 00:18:34.647 Removing: /var/run/dpdk/spdk_pid65014 00:18:34.647 Removing: /var/run/dpdk/spdk_pid65149 00:18:34.647 Removing: /var/run/dpdk/spdk_pid65293 00:18:34.647 Removing: /var/run/dpdk/spdk_pid65368 00:18:34.647 Removing: /var/run/dpdk/spdk_pid65763 00:18:34.647 Removing: /var/run/dpdk/spdk_pid66112 
00:18:34.647 Removing: /var/run/dpdk/spdk_pid66120 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68338 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68340 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68623 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68641 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68662 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68687 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68692 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68776 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68784 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68892 00:18:34.647 Removing: /var/run/dpdk/spdk_pid68894 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69008 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69010 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69417 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69466 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69569 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69648 00:18:34.906 Removing: /var/run/dpdk/spdk_pid69958 00:18:34.906 Removing: /var/run/dpdk/spdk_pid70161 00:18:34.906 Removing: /var/run/dpdk/spdk_pid70541 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71074 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71506 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71563 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71610 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71668 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71770 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71826 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71886 00:18:34.906 Removing: /var/run/dpdk/spdk_pid71941 00:18:34.906 Removing: /var/run/dpdk/spdk_pid72272 00:18:34.906 Removing: /var/run/dpdk/spdk_pid73452 00:18:34.906 Removing: /var/run/dpdk/spdk_pid73598 00:18:34.906 Removing: /var/run/dpdk/spdk_pid73846 00:18:34.906 Removing: /var/run/dpdk/spdk_pid74402 00:18:34.906 Removing: /var/run/dpdk/spdk_pid74562 00:18:34.906 Removing: /var/run/dpdk/spdk_pid74724 00:18:34.906 Removing: /var/run/dpdk/spdk_pid74821 00:18:34.906 Removing: /var/run/dpdk/spdk_pid74993 00:18:34.906 Removing: /var/run/dpdk/spdk_pid75096 00:18:34.906 Removing: /var/run/dpdk/spdk_pid75765 00:18:34.906 Removing: /var/run/dpdk/spdk_pid75801 00:18:34.906 Removing: /var/run/dpdk/spdk_pid75836 00:18:34.906 Removing: /var/run/dpdk/spdk_pid76086 00:18:34.906 Removing: /var/run/dpdk/spdk_pid76116 00:18:34.906 Removing: /var/run/dpdk/spdk_pid76151 00:18:34.906 Clean 00:18:34.906 killing process with pid 48048 00:18:34.906 killing process with pid 48052 00:18:34.906 07:46:00 -- common/autotest_common.sh@1446 -- # return 0 00:18:34.906 07:46:00 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:18:34.906 07:46:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.906 07:46:00 -- common/autotest_common.sh@10 -- # set +x 00:18:35.165 07:46:00 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:18:35.165 07:46:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.165 07:46:00 -- common/autotest_common.sh@10 -- # set +x 00:18:35.165 07:46:00 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:35.165 07:46:00 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:35.165 07:46:00 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:35.165 07:46:00 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:18:35.165 07:46:00 -- spdk/autotest.sh@383 -- # hostname 00:18:35.165 07:46:00 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:35.424 geninfo: WARNING: invalid characters removed from testname! 00:18:57.371 07:46:21 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:59.275 07:46:24 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:01.178 07:46:26 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:03.716 07:46:28 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:05.623 07:46:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:08.160 07:46:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:10.062 07:46:35 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:10.321 07:46:35 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:19:10.321 07:46:35 -- common/autotest_common.sh@1690 -- $ lcov --version 00:19:10.321 07:46:35 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:19:10.321 07:46:35 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:19:10.321 07:46:35 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:19:10.321 07:46:35 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:19:10.321 07:46:35 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:19:10.321 07:46:35 -- scripts/common.sh@335 -- $ IFS=.-: 00:19:10.321 07:46:35 -- scripts/common.sh@335 -- $ read -ra ver1 00:19:10.321 07:46:35 -- scripts/common.sh@336 -- $ IFS=.-: 
00:19:10.321 07:46:35 -- scripts/common.sh@336 -- $ read -ra ver2 00:19:10.321 07:46:35 -- scripts/common.sh@337 -- $ local 'op=<' 00:19:10.321 07:46:35 -- scripts/common.sh@339 -- $ ver1_l=2 00:19:10.321 07:46:35 -- scripts/common.sh@340 -- $ ver2_l=1 00:19:10.321 07:46:35 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:19:10.321 07:46:35 -- scripts/common.sh@343 -- $ case "$op" in 00:19:10.321 07:46:35 -- scripts/common.sh@344 -- $ : 1 00:19:10.321 07:46:35 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:19:10.321 07:46:35 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.321 07:46:35 -- scripts/common.sh@364 -- $ decimal 1 00:19:10.321 07:46:35 -- scripts/common.sh@352 -- $ local d=1 00:19:10.321 07:46:35 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:19:10.321 07:46:35 -- scripts/common.sh@354 -- $ echo 1 00:19:10.321 07:46:35 -- scripts/common.sh@364 -- $ ver1[v]=1 00:19:10.321 07:46:35 -- scripts/common.sh@365 -- $ decimal 2 00:19:10.321 07:46:35 -- scripts/common.sh@352 -- $ local d=2 00:19:10.321 07:46:35 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:19:10.321 07:46:35 -- scripts/common.sh@354 -- $ echo 2 00:19:10.321 07:46:35 -- scripts/common.sh@365 -- $ ver2[v]=2 00:19:10.321 07:46:35 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:19:10.321 07:46:35 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:19:10.321 07:46:35 -- scripts/common.sh@367 -- $ return 0 00:19:10.321 07:46:35 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.321 07:46:35 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:19:10.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.321 --rc genhtml_branch_coverage=1 00:19:10.321 --rc genhtml_function_coverage=1 00:19:10.321 --rc genhtml_legend=1 00:19:10.321 --rc geninfo_all_blocks=1 00:19:10.321 --rc geninfo_unexecuted_blocks=1 00:19:10.321 00:19:10.321 ' 00:19:10.321 07:46:35 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:19:10.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.321 --rc genhtml_branch_coverage=1 00:19:10.321 --rc genhtml_function_coverage=1 00:19:10.321 --rc genhtml_legend=1 00:19:10.321 --rc geninfo_all_blocks=1 00:19:10.321 --rc geninfo_unexecuted_blocks=1 00:19:10.321 00:19:10.321 ' 00:19:10.321 07:46:35 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:19:10.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.321 --rc genhtml_branch_coverage=1 00:19:10.321 --rc genhtml_function_coverage=1 00:19:10.321 --rc genhtml_legend=1 00:19:10.321 --rc geninfo_all_blocks=1 00:19:10.321 --rc geninfo_unexecuted_blocks=1 00:19:10.321 00:19:10.321 ' 00:19:10.321 07:46:35 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:19:10.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.321 --rc genhtml_branch_coverage=1 00:19:10.321 --rc genhtml_function_coverage=1 00:19:10.321 --rc genhtml_legend=1 00:19:10.321 --rc geninfo_all_blocks=1 00:19:10.321 --rc geninfo_unexecuted_blocks=1 00:19:10.321 00:19:10.321 ' 00:19:10.321 07:46:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.321 07:46:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:10.321 07:46:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.321 07:46:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.321 07:46:35 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.321 07:46:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.322 07:46:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.322 07:46:35 -- paths/export.sh@5 -- $ export PATH 00:19:10.322 07:46:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.322 07:46:35 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:10.322 07:46:35 -- common/autobuild_common.sh@440 -- $ date +%s 00:19:10.322 07:46:35 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733125595.XXXXXX 00:19:10.322 07:46:35 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733125595.gcfk68 00:19:10.322 07:46:35 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:19:10.322 07:46:35 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:19:10.322 07:46:35 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:10.322 07:46:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:10.322 07:46:35 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:10.322 07:46:35 -- common/autobuild_common.sh@456 -- $ get_config_params 00:19:10.322 07:46:35 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:19:10.322 07:46:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:10.322 07:46:35 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:19:10.322 07:46:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:10.322 07:46:35 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:10.322 07:46:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:10.322 07:46:35 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 
]] 00:19:10.322 07:46:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:10.322 07:46:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:10.322 07:46:35 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:10.322 07:46:35 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:10.322 07:46:35 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:10.322 07:46:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:10.322 + [[ -n 5244 ]] 00:19:10.322 + sudo kill 5244 00:19:10.588 [Pipeline] } 00:19:10.605 [Pipeline] // timeout 00:19:10.610 [Pipeline] } 00:19:10.628 [Pipeline] // stage 00:19:10.633 [Pipeline] } 00:19:10.647 [Pipeline] // catchError 00:19:10.655 [Pipeline] stage 00:19:10.657 [Pipeline] { (Stop VM) 00:19:10.667 [Pipeline] sh 00:19:10.947 + vagrant halt 00:19:14.231 ==> default: Halting domain... 00:19:20.812 [Pipeline] sh 00:19:21.091 + vagrant destroy -f 00:19:23.650 ==> default: Removing domain... 00:19:23.921 [Pipeline] sh 00:19:24.200 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:19:24.209 [Pipeline] } 00:19:24.224 [Pipeline] // stage 00:19:24.230 [Pipeline] } 00:19:24.245 [Pipeline] // dir 00:19:24.250 [Pipeline] } 00:19:24.266 [Pipeline] // wrap 00:19:24.272 [Pipeline] } 00:19:24.285 [Pipeline] // catchError 00:19:24.295 [Pipeline] stage 00:19:24.297 [Pipeline] { (Epilogue) 00:19:24.312 [Pipeline] sh 00:19:24.594 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:29.960 [Pipeline] catchError 00:19:29.963 [Pipeline] { 00:19:29.974 [Pipeline] sh 00:19:30.252 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:30.252 Artifacts sizes are good 00:19:30.260 [Pipeline] } 00:19:30.271 [Pipeline] // catchError 00:19:30.280 [Pipeline] archiveArtifacts 00:19:30.286 Archiving artifacts 00:19:30.407 [Pipeline] cleanWs 00:19:30.417 [WS-CLEANUP] Deleting project workspace... 00:19:30.417 [WS-CLEANUP] Deferred wipeout is used... 00:19:30.422 [WS-CLEANUP] done 00:19:30.423 [Pipeline] } 00:19:30.437 [Pipeline] // stage 00:19:30.442 [Pipeline] } 00:19:30.479 [Pipeline] // node 00:19:30.484 [Pipeline] End of Pipeline 00:19:30.517 Finished: SUCCESS